Sybase Performance Tuning Interview Questions

Whether you are a developer creating web applications, a DBA, or a tester involved in web testing, SQL skills are important both in database programming and database validation. Hence, we took up another challenge and came up with 25 SQL performance interview questions.

SQL performance tuning is a tough task and is key in handling the increasing load on a web application. So the interviewer will certainly dig in and check how well you know the subject.

Therefore, we’ve selectively picked 25 SQL performance tuning interview questions that should give you adequate coverage of SQL performance tuning concepts. Also, we recently published a set of 20 SQL interview questions on complex queries. Go through it as well to get fully prepared for your SQL interview.



Reporting: SQL Performance and Tuning

This is a list of some techniques used successfully at several different sites.

1) Verify tables have had “update statistics” run on them; verify “sp_recompile” has been run on them.
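For example, assuming a hypothetical table named orders:

update statistics orders
go
exec sp_recompile orders
go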

2) Verify any declared variables have the same datatype as their corresponding columns – a mismatch here is a common cause of poor performance.

3) Force index usage as follows, with a hint:
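In Sybase ASE the hint is placed in parentheses after the table name; the table and index names below are hypothetical:

select o.order_id, o.amount
from orders o (index orders_dest_idx)
where o.dest_id = 100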

4) Use SET TABLE COUNT. This raises the number of tables the optimizer considers at a time when evaluating join order (the default is four).

Example: set table count 6

Then, compile the procedure in the same session.

5) If temp tables are being used, put the temp table creation statements in one procedure, and the processing SQL in another procedure. This allows the optimizer to form a query plan on the already-created tables.

Example:
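A minimal sketch with hypothetical names; note that the worker procedure must be compiled while a #rpt_work table exists (see the step-by-step rules under “Using Temporary Tables” below):

-- worker procedure, compiled against an already-created #rpt_work
create procedure rpt_process
as
    select emp_id, salary
    from #rpt_work
    where salary > 50000
go

-- driver procedure: creates the temp table, then calls the worker
create procedure rpt_driver
as
    create table #rpt_work ( emp_id int, salary money null )
    insert #rpt_work select emp_id, salary from employee
    exec rpt_process
go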

– Plan for growth. Assume the driver table doubled or tripled in size; would the report still function?

– Avoid inefficient comparisons in the where clause:

– use “WHERE EXISTS ( )” rather than “WHERE NOT EXISTS”

– use “!=” rather than “<>”

– use “IS NOT NULL” rather than “<>NULL”

– use “IS NULL” rather than “=NULL”

– avoid distinct if possible; see the cursor loop option below

– use meaningful names for temp tables … don’t use #temp (lame)

1) Single query

Single query reports are rare – usually they involve getting a simple list.

– Don’t try to ‘shoehorn’ SQL into one statement. Shorter programs are great for C or Perl applications, but this is not the case in SQL. Think “Bigger is Better” (and more maintainable).

– Keep queries from using more than four tables if possible.

2) Cursor on driver table(s), with IF..THEN processing in loop

Using a cursor for complex reports almost always increases performance when large tables and a lot of joins are involved.

– Keep cursor queries from using more than two tables if possible, and make sure this query performs well on its own.

– Try to have a unique key of some sort available within the tables involved. Strange results have been known to occur when a cursor is scanning rows that are exactly alike.

– Don’t use cursors for updating.

– Use IF statements for filtering results even further. In most cases, a code construct like the one below is better than cramming the logic into a where clause.
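A minimal sketch of the construct, using a hypothetical employee table and a #high_paid temp table:

declare @emp_id int, @last_name varchar(40), @salary money

declare cursor1 cursor for
    select emp_id, last_name, salary from employee
open cursor1
fetch cursor1 into @emp_id, @last_name, @salary
while (@@sqlstatus = 0)
begin
    -- filter with IF logic here rather than piling it into the where clause
    if @salary > 100000 and @last_name not like "TEST%"
        insert #high_paid values (@emp_id, @last_name)
    fetch cursor1 into @emp_id, @last_name, @salary
end
close cursor1
deallocate cursor cursor1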

3) Set processing without cursors

This technique should be attempted when even a cursor construct fails to perform adequately. Basically, the driver query is re-run with each iteration of the loop.

Sample, with cursor:

fetch cursor1 into @emp_id, @last_name, @salary
while (@@sqlstatus = 0)
begin
    -- process the row here
    fetch cursor1 into @emp_id, @last_name, @salary
end

Sample, with set processing:

select @emp_id = 0, @loop = 1
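A sketch of how the loop might continue, assuming the variables above are declared and a hypothetical employee table; the driver query is re-run on every pass, walking the key upward:

while @loop = 1
begin
    -- re-run the driver query: find the next emp_id above the last one processed
    select @emp_id = min(emp_id)
    from employee
    where emp_id > @emp_id

    if @emp_id is null
        select @loop = 0
    else
    begin
        select @last_name = last_name, @salary = salary
        from employee
        where emp_id = @emp_id
        -- process the row here, as the cursor loop body would
    end
end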

Transaction Log Filling Up?

If the transaction log is filling up, for tempdb or the main database, there is likely something wrong with the report logic.

Things to check:

– Instead of repetitively updating each row, can the values be obtained ahead of time, and then inserted with a single transaction?

– Are the “joined” updates occurring on each row only once? When updating using a join statement, make sure that the tables in question are joined in a way that avoids duplicate rows. Try running the SQL statement as a SELECT first – check it out.

– Are you cramming 500,000 rows from a temp table into a db table? Try eliminating the temp table.

– Create indexes on updated/inserted tables after the fact.

– Use “set rowcount” along with “waitfor delay” if log problems persist
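A minimal sketch of the batching pattern, assuming a hypothetical old_rows table; set rowcount keeps each delete small, and waitfor delay gives the log a chance to be truncated between batches:

declare @rows int
select @rows = 1
set rowcount 1000
while @rows > 0
begin
    delete old_rows where status = "purged"
    select @rows = @@rowcount
    waitfor delay "00:00:05"
end
set rowcount 0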

*** A proper DBA will never extend the log segment based on the needs of a single report.


Transact SQL: Finding duplicate rows in a table

This example finds cargo records which have duplicate destination ids:

select t1.*
from cargo t1
where ( select count(*)
        from cargo t2
        where t2.dest_id = t1.dest_id ) > 1

Using Temporary Tables

Temp tables allow developers to create and scan tables within a stored procedure – and have the tables totally isolated from all other database connections. This is very valuable when results need to be processed several times within a loop, or when a complex result set is expected (like a crosstab). Note that temp table transactions are logged within tempdb (exception: tables created via select into, which is minimally logged).

  • Temporary tables are created in the tempdb database. To create a temporary table, you must have create table permission in tempdb. create table permission defaults to the Database Owner.
  • The table exists until the current session ends or until its owner drops it using drop table.
  • Temporary tables are accessible only by the current Adaptive Server session or procedure.
  • Temporary tables with names beginning with “#” that are created within stored procedures disappear when the procedure exits. A single procedure can:

  • Create a temporary table
  • Insert data into the table
  • Run queries on the table
  • Call other procedures that reference the table
  • Since the temporary table must exist in order to create procedures that reference it, here are the steps to follow (a worked script appears after the warning below):

  1. Use create table to create the temporary table.
  2. Create the procedures that access the temporary table, but do not create the procedure that creates the table.
  3. Drop the temporary table.
  4. Create the procedure that creates the table and calls the procedures created in step 2.
  • You can create temporary tables without the # prefix, using create table tempdb..tablename from inside a stored procedure. These tables do not disappear when the procedure completes, so they can be referenced by independent procedures. Follow the steps above to create these tables.

    Warning!

    Create temporary tables with the “tempdb..” prefix from inside a stored procedure only if you intend to share the table among users and sessions. Stored procedures that create and drop a temporary table should use the # prefix to avoid inadvertent sharing.
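    A worked sketch of the four steps above, using hypothetical object names:

    -- step 1: create the temporary table
    create table #results ( id int, total money null )
    go

    -- step 2: create the procedure that accesses it
    --         (this compiles only because #results exists right now)
    create procedure proc_report
    as
        select id, total from #results
    go

    -- step 3: drop the temporary table
    drop table #results
    go

    -- step 4: create the procedure that creates the table and
    --         calls the procedure created in step 2
    create procedure proc_driver
    as
        create table #results ( id int, total money null )
        insert #results select id, sum(amount) from sales group by id
        exec proc_report
    go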

    General rules on temporary tables

    Temporary tables with names that begin with # are subject to the following restrictions:

  • You cannot create views on these tables.
  • You cannot associate triggers with these tables.
  • You cannot tell which session or procedure has created these tables.
  • These restrictions do not apply to shareable, temporary tables created in tempdb.

    Rules that apply to both types of temporary tables:

  • You can associate rules, defaults, and indexes with temporary tables. Indexes created on a temporary table disappear when the temporary table disappears.
  • System procedures such as sp_help work on temporary tables only if you invoke them from tempdb.
  • You cannot use user-defined datatypes in temporary tables unless the datatypes exist in tempdb; that is, unless the datatypes have been explicitly created in tempdb since the last time Adaptive Server was restarted.
  • You do not have to set the select into/bulkcopy option on to select into a temporary table.
    What is the difference between SQL & T-SQL?

    SQL – statements are submitted individually to the database server.

    T-SQL – statements are written as a batch, in which all commands are submitted to the server in a single go. Batches are typically scheduled and run overnight, performing all their inserts and updates, whereas individual SQL statements run separately. T-SQL also offers additional commands beyond standard SQL.

    SQL is the Structured Query Language, the ANSI/ISO standard database language. SQL Server’s implementation of the language is called Transact-SQL (T-SQL).

    What is the difference between char and varchar data types?

    char is used for fixed-length storage, whereas varchar is used for variable-length storage.

    For example, if we have char(10), we can store a 10-character value in it; but if we store only a 6-character value, the remaining 4 characters of space are wasted (padded out). This limitation does not apply to varchar: if we use less than the defined space, the rest of the space is not wasted.

    What is difference between User and Login in Sybase database ?

    A server login is a name with which you connect to the ASE server from a client application. This requires a password. Login names and (encrypted) passwords are stored in master..syslogins. Once you’ve logged into the server, you still don’t have any right to access any databases. By default, when doing a ‘use your_db_name’,you’ll get an error that this login is not a user in that database. A database user is what controls access to each individual database for a specific login. So, in order to let a real-life user (i.e. a person) access a particular database, he/she requires (i) a login to connect to ASE, and (ii) a database user entry in that particular database to allow access to this database. Note that, once you have access to a database, you’re still not allowed anything by default: you need permissions to select data from tables, for example. Unless that permission has been granted to ‘public’ (or to your specific database user) by the table owner, you won’t be allowed to query the table.

    For simplicity, the ‘guest’ user, and ‘user groups’ were omitted from this explanation. Also, ‘roles’ can have an impact on these access authorisations. See the ASE System Administration Guide for full details.

    Transaction modes:

    chained mode – This is the ANSI standard. You are always in a transaction. You never have to issue BEGIN TRAN because it is issued implicitly as soon as you connect, and again as soon as you finish the previous transaction. So as soon as you issue an explicit COMMIT TRAN or ROLLBACK TRAN, you are in a new transaction. It looks strange – you only close transactions with commit or rollback, and never have to open them, because one is always already open for you.

    unchained mode (default for Sybase) – Each individual statement is atomic. If you want to put several statements in one transaction, you have to explicitly issue BEGIN TRAN and at the end do COMMIT or ROLLBACK.

    Stored procedures remember in which mode they were compiled and will not run in a different mode, unless you define them with “anymode” (via sp_procxmode).
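    A minimal illustration of the two modes, assuming a hypothetical account table:

    -- unchained (default): group statements explicitly
    begin tran
        update account set balance = balance - 100 where acct_id = 1
        update account set balance = balance + 100 where acct_id = 2
    commit tran

    -- chained (ANSI): a transaction is already open implicitly,
    -- so only the commit (or rollback) is written
    set chained on
    go
    update account set balance = balance - 100 where acct_id = 1
    update account set balance = balance + 100 where acct_id = 2
    commit tran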

    Locking: a read puts a shared lock; an update or insert puts an exclusive lock.

    Table locks may be a result of lock promotion, but more often they happen because of your SQL / indexes. Normally a table read puts shared locks. But if you have an UPDATE or a DELETE with a WHERE clause that causes a table scan, you will get an exclusive table lock without any lock promotion, regardless of locking scheme.

    Switching to row-level locking (DATAROWS locking scheme) may not solve your problem and is very expensive: your data takes up more space, UPDATEs start causing a lot more fragmentation, and you need A LOT more locks. Lock promotion works by number of locks used, and since DATAROWS uses considerably more locks, you’ll hit the promotion threshold much sooner. If you run out of locks, you will also get a promotion to a table lock. And if you are committing more rows at a time than the HWM (high lock promotion threshold), then you will still get lock promotion even with the datarows locking scheme.

    There are 3 locking schemes:

    – ALLPAGES (data and indexes)
    – DATAPAGES (only data)
    – DATAROWS (only data, row-level locking)

    Use sp_lock and sp_objectstats to see if multiple connections/users attempt locks on rows on the same page. Use sp_sysmon to detect lock promotion.

    Lock promotion thresholds (LWM = Low Water Mark, HWM = High Water Mark):

    – LWM = 200 – never attempt a promotion below 200 locks
    – HWM = 200 – always attempt promotion once 200 locks are reached
    – Number of locks = 35,000

    Note: Switching to DATAROWS might help your blocking, but maybe the problem isn’t that people are trying to read the same row, or even different rows on the same page. It might be that they’re blocking on data vs index access. DATAPAGES is a good first step to try since it eliminates index locking and usually this is all you need. It has most of the benefits of DATAROWS without the extreme overhead. It’s always a good idea to try DATAPAGES first.

    Also, the solution to avoid table locks may be simply to increase the lock promotion thresholds.

    Note: changing from ALLPAGES to DATAPAGES or DATAROWS is an expensive operation. Switching between DATAPAGES and DATAROWS is very easy and fast.

    Note: if a WHERE clause has local variables, the optimizer will not use an index, thus causing a table scan.

    Example: modifying the locking scheme:

    alter table authors lock datarows

    Adaptive Server global variables

    Global variables are system-defined variables updated by Adaptive Server while the system is running. Some global variables are session-specific, while others are server instance-specific. For example, @@error contains the last error number generated by the system for a given user connection.

    @@connections Returns the number of user logins attempted.
    @@error Returns the error number most recently generated by the system.
    @@identity Returns the most recently generated IDENTITY column value.
    @@isolation Returns the value of the session-specific isolation level (0, 1, or 3) of the current Transact-SQL program.
    @@servername Returns the name of Adaptive Server.
    @@spid Returns the server process ID of the current process.
    @@sqlstatus Returns status information (warning exceptions) resulting from the execution of a fetch statement.
    @@trancount Returns the nesting level of transactions in the current user session.
    @@version Returns the date, version string, and so on of the current release of Adaptive Server.
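    A short sketch showing @@error in use, assuming a hypothetical orders table; capture the value immediately, because the very next statement resets it:

    declare @err int
    insert into orders (order_id, amount) values (1, 99.50)
    select @err = @@error
    if @err != 0
        print "insert failed"
    else
        print "insert succeeded"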

    OLTP systems are designed for transaction processing, such as online banking or retail sales. They are typically characterized by a large number of short transactions, with a high degree of concurrency. OLAP systems are designed for analytical processing, such as data warehousing. They are typically characterized by a smaller number of longer transactions, with a low degree of concurrency.

    SAP Business Objects Explorer is a data visualization tool that allows users to explore and interact with data in a variety of ways. It provides a number of features and options for customizing the way data is displayed, and it is also possible to integrate it with other SAP Business Objects products for even more functionality.

    Yes, it is possible to run Sybase on personal computers such as Windows or MacOS. In order to do so, you will need to install the Sybase software on your computer. Once the software is installed, you will be able to connect to a Sybase database and run queries against it.

    Sybase has a number of big data solutions that integrate with Hadoop or Spark, depending on the needs of the customer. One option is the Sybase Big Data Appliance, which is a pre-configured system that includes all of the hardware and software needed to get started with big data analytics. Another option is the Sybase Big Data Integration Suite, which includes a number of tools for data integration, data quality, and data governance.

    A data warehouse is a database used for reporting and data analysis. It is a central repository of data that can be used to answer business questions. Data warehouses are usually created by extracting data from multiple sources, such as transactional databases, and then transforming it into a format that is easy to analyze.

    SQL Performance Interview Questions and Answers.

    Q:-1. What is SQL Query Optimization?

    Ans. Query optimization is the process of writing a query in a way that allows it to execute as quickly as possible. It is a significant step for any standard application.

    Q:-2. What are some tips to improve the performance of SQL queries?

    Ans. Optimizing SQL queries can bring a substantial positive impact on performance. It also depends on the level of RDBMS knowledge you have. Let’s now go over some of the tips for tuning SQL queries.

    1. Prefer to use views and stored procedures instead of writing long queries. This also helps in minimizing network load.

    2. It’s better to introduce constraints instead of triggers. They are more efficient than triggers and can increase performance.

    3. Make use of table-level variables instead of temporary tables.

    4. The UNION ALL clause responds faster than UNION. It doesn’t look for duplicate rows whereas the UNION statement does that regardless of whether they exist or not.

    5. Avoid unnecessary use of DISTINCT and HAVING clauses.

    6. Avoid excessive use of SQL cursors.

    7. Make use of the SET NOCOUNT ON clause while building stored procedures. It suppresses the message reporting the number of rows affected by a T-SQL statement, which leads to reduced network traffic (see the sketch after this list).

    8. It’s a good practice to return the required column instead of all the columns of a table.

    9. Prefer not to use complex joins and avoid disproportionate use of triggers.

    10. Create indexes for tables and adhere to the standards.
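    A small sketch combining tips 7 and 8, with hypothetical table and procedure names:

    create procedure get_customer_orders
        @cust_id int
    as
        set nocount on                        -- tip 7: suppress rows-affected messages
        select order_id, order_date, amount   -- tip 8: return only the required columns
        from orders
        where cust_id = @cust_id
    go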

    Q:-3. What are the bottlenecks that affect the performance of a Database?

    Ans. In a web application, the database tier can prove to be a critical bottleneck in achieving the last mile of scalability. If a database has a performance leakage, it can become a bottleneck and likely cause issues. Some of the common performance issues are as follows.

    1. Abnormal CPU usage is the most obvious performance bottleneck. You can address it by adding CPU units or upgrading to a faster CPU. It may look like a simple issue, but abnormal CPU usage can lead to other problems.

    2. Low memory is the next most common bottleneck. If the server isn’t able to manage the peak load, it poses a big question mark over performance. For any application, memory is critical because it is much faster than persistent storage. Also, when free RAM drops below a specific threshold, the OS turns to the swap space, which makes the application run very slowly.

    You can resolve it by expanding the physical RAM, but that won’t solve memory leaks if there are any. In such a case, you need to profile the application to identify potential leaks within its code.

    3. Too much dependency on external storage, like a SATA disk, could also become a bottleneck. Its impact becomes visible while writing large amounts of data to the disk. If output operations are very slow, it is a clear indication that storage is becoming the bottleneck.

    In such cases, you need to do scaling. Replace the existing drive with a faster one. Try upgrading to an SSD hard drive or something similar.

    Q:-4. What are the steps involved in improving the SQL performance?

    Ans.

    Discover – First of all, find out the areas of improvement. Explore tools like Profiler, Query execution plans, SQL tuning advisor, dynamic views, and custom stored procedures.

    Review – Brainstorm the data available to isolate the main issues.

    Propose – Here is a standard approach one can adopt to boost the performance. However, you can customize it further to maximize the benefits.

    1. Identify fields and create indexes.
    2. Modify large queries to make use of the indexes created.
    3. Refresh tables and views and update statistics.
    4. Reset existing indexes and remove unused ones.
    5. Look for dead blocks and remove them.

    Validate – Test the SQL performance tuning approach. Monitor the progress at a regular interval. Also, track if there is any adverse impact on other parts of the application.

    Publish – Now, it’s time to share the working solution with everyone in the team. Let them know all the best practices so that they can use it with ease.

    Q:-5. What is an explain plan?

    Ans. It’s a term used in Oracle. It is a type of SQL clause in Oracle which displays the execution plan that the optimizer has chosen for executing SELECT/UPDATE/INSERT/DELETE statements.
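    For instance, a plan can be generated and displayed as follows (the employees table is just a hypothetical example; dbms_xplan.display formats the most recently explained plan):

    explain plan for
    select * from employees where department_id = 10;

    select * from table(dbms_xplan.display);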

    Q:-6. How do you analyze an explain plan?

    Ans. While analyzing the explain plan, check the following areas.

    1. Driving Table
    2. Join Order
    3. Join Method
    4. Unintentional cartesian product
    5. Nested loops, merge sort, and hash join
    6. Full Table Scan
    7. Unused indexes
    8. Access paths

    Q:-7. How do you tune a query using the explain plan?

    Ans. The explain plan shows a complete output of the query costs, including each subquery. The cost is directly proportional to the query execution time. The plan also depicts problems in queries or sub-queries while fetching data.

    Q:-8. What is Summary advisor and what type of information does it provide?

    Ans. Summary Advisor is a tool for recommending and materializing views. It can help in elevating SQL performance by selecting the proper set of materialized views for a given workload. It also provides data about materialized view recommendations.

    Q:-9. What could most likely cause a SQL query to run as slow as 5 minutes?

    Ans. Most probably, a sudden surge in the volume of data in a particular table could slow down the output of a SQL query. So collect the required stats for the target table. Also, monitor any change in the DB level or within the underlying object level.

    Q:-10. What is a Latch Free Event? And when does it occur? Also, how does the system handle it?

    Ans. In Oracle, a Latch Free wait event occurs when a session requires a latch and attempts to get it, but fails because someone else has it.

    So it sleeps, waiting for the latch to become free, then wakes up and tries again. The duration for which it was inactive is the wait time for Latch Free. Also, there is no ordered queue of waiters on a latch, so the one who comes first gets it.

    Q:-11. What is Proactive tuning and Reactive tuning?

    Ans.

    Proactive tuning – The architect or the DBA determines which combination of system resources and available Oracle features fulfill the criteria during Design and Development.

    Reactive tuning – It is the bottom-up approach to discover and eliminate the bottlenecks. The objective is to make Oracle respond faster.

    Q:-12. What are Rule-based Optimizer and Cost-based Optimizer?

    Ans. Oracle determines how to get the required data for processing a valid SQL statement. It uses one of the following two methods to make this decision.

    Rule-based Optimizer – When a server doesn’t have internal statistics supporting the objects referenced by the statement, the RBO method gets preference. However, Oracle has deprecated this method in favor of the cost-based approach.

    Cost-based Optimizer – When there is an abundance of the internal statistics, the CBO gets the precedence. It verifies several possible execution plans and chooses the one with the lowest cost based on the system resources.

    Q:-13. What are several SQL performance tuning enhancements in Oracle?

    Ans. Oracle provides many performance enhancements; some of them are:

    1. Automatic Performance Diagnostic and Tuning Features
    2. Automatic Shared Memory Management – It gives Oracle control of allocating memory within the SGA.
    3. Wait-model improvements – A number of views have come to boost the Wait-model.
    4. Automatic Optimizer Statistics Collection – Collects optimizer statistics using a scheduled job called GATHER_STATS_JOB.
    5. Dynamic Sampling – Enables the server to enhance performance.
    6. CPU Costing – It’s the basic cost model for the optimizer (CPU+I/O), with the cost unit as time.
    7. Rule Based Optimizer Obsolescence – No longer used.
    8. Tracing Enhancements – End-to-end tracing, which allows a client process to be identified via the Client Identifier instead of the typical Session ID.

    Q:-14. What are the tuning indicators Oracle proposes?

    Ans. The following high-level tuning indicators are available to establish if a database is experiencing bottlenecks or not:

    1. Buffer Cache Hit Ratio.

    It uses the following formula.

    Hit Ratio = (Logical Reads – Physical Reads) / Logical Reads

    Action: Advance the DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i) to improve the hit ratio.
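    One common way to compute this ratio from v$sysstat (a sketch; logical reads = db block gets + consistent gets):

    select (1 - phy.value / (db.value + con.value)) * 100 as "Hit Ratio %"
    from   v$sysstat db, v$sysstat con, v$sysstat phy
    where  db.name  = 'db block gets'
    and    con.name = 'consistent gets'
    and    phy.name = 'physical reads';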

    2. Library Cache Hit Ratio.

    Action: Advance the SHARED_POOL_SIZE to increase the hit ratio.

    Q:-15. What do you check first if there are multiple fragments in the SYSTEM tablespace?

    Ans. First of all, verify via the DBA_USERS view that no users have the SYSTEM tablespace as their TEMPORARY or DEFAULT tablespace assignment.

    Q:-16. When would you add more Copy Latches? What are the parameters that control the Copy Latches?

    Ans. If there is excessive contention for the Copy Latches, you can detect it from the “redo copy” latch hit ratio.

    In such a case, add more Copy Latches via the initialization parameter LOG_SIMULTANEOUS_COPIES, up to twice the number of CPUs available.

    Q:-17. How do you confirm if a tablespace has disproportionate fragmentation?

    Ans. You can confirm it by checking the output of a SELECT against the dba_free_space view. If it shows that the count of a tablespace’s free extents is greater than the count of its data files, then that indicates excessive fragmentation.
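    A simple query of this kind (compare the free-extent counts against the number of data files per tablespace in dba_data_files):

    select tablespace_name, count(*) as free_extents
    from   dba_free_space
    group  by tablespace_name
    order  by free_extents desc;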

    Q:-18. What can you do to optimize the %XYZ% queries?

    Ans. First of all, set the optimizer to scan all the entries from the index instead of the table. You can achieve this by specifying hints.

    Please note that crawling the smaller index takes less time than scanning the entire table.

    Q:-19. Where does the I/O statistics per table exist in Oracle?

    Ans. There is a report known as UTLESTAT which displays the I/O per tablespace. But it doesn’t help to find the table which has the most I/O.

    Q:-20. When is the right time to rebuild an index?

    Ans. First of all, select the target index and run the ‘ANALYZE INDEX VALIDATE STRUCTURE’ command. Every time you run it, a single row gets created in the INDEX_STATS view.

    But the row gets overwritten the next time you run the ANALYZE INDEX command, so it’s better to move the contents of the view to a local table. Thereafter, analyze the ratio of ‘DEL_LF_ROWS’ to ‘LF_ROWS’ and see if you need to rebuild the index.
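    A sketch of the check, using a hypothetical index name:

    analyze index emp_name_idx validate structure;

    select name, del_lf_rows, lf_rows,
           round(del_lf_rows / decode(lf_rows, 0, 1, lf_rows) * 100, 2) as pct_deleted
    from   index_stats;

    A commonly quoted rule of thumb is to consider a rebuild when pct_deleted exceeds 20 percent.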

    Q:-21. What exactly would you do to check the performance issue of SQL queries?

    Ans. Mostly the database isn’t slow; it’s a particular worker session that drags the performance down. And it’s the abnormal session accesses which cause the bottlenecks.

    1. Review the events that are in wait or listening mode.
    2. Hunt down the locked objects in a particular session.
    3. Check if the SQL query is pointing to the right index or not.
    4. Launch SQL Tuning Advisor and analyze the target SQL_ID for making any performance recommendation.
    5. Run the “free” command to check the RAM usage. Also, use the TOP command to identify any process hogging the CPU.

    Q:-22. What is the information you get from the STATSPACK Report?

    Ans. We can get the following statistics from the STATSPACK report.

    1. WAIT notifiers
    2. Load profile
    3. Instance Efficiency Hit Ratio
    4. Latch Waits
    5. Top SQL
    6. Instance Action
    7. File I/O and Segment Stats
    8. Memory allocation
    9. Buffer Waits

    Q:-23. What are the factors to consider for creating Index on Table? Also, How to select a column for Index?

    Ans. Creation of index depends on the following factors.

    1. Size of the table
    2. Volume of data

    If the table is large and we need only a small subset of its rows for a report, then it’s better to create an index.

    Regarding the column to be used for the index: as per the business rule, you should use a primary key or a unique key for creating a unique index.

    Q:-24. What is the main difference between Redo, Rollback, and Undo?

    Ans.

    Redo – Log that records all changes made to data, including both uncommitted and committed changes.

    Rollback – Segments that store the previous state of data before the changes.

    Undo – Helpful in building a read consistent view of data. The data gets stored in the undo tablespace.

    Q:-25. How do you identify the shared memory and semaphores of a particular DB instance when multiple instances are running on a server?

    Ans. Set the following options to distinguish between the in-memory resources of a DB instance.

    1. SETMYPID
    2. IPC
    3. TRACEFILE_NAME

    Use the ORADEBUG command to explore their underlying options.

    FAQ

    How will you performance tune a query in Sybase database?

    Sybase Performance Tuning Tips
    1. Specify Growth Projection. If you’re using ASE 15, it’s for a reason. …
    2. Tuning Indexes. …
    3. Mind Foreign Key Constraints. …
    4. Monitor CPU Usage. …
    5. Avoid Using Co-Related Subqueries. …
    6. Collect Performance Data. …
    7. Lock, Lock, Lock. …
    8. Process in Several Partitions.

    How would you optimize SQL query interview question?

    Let’s now go over some of the tips for tuning SQL queries.
    1. Prefer to use views and stored procedures instead of writing long queries. …
    2. It’s better to introduce constraints instead of triggers. …
    3. Make use of table-level variables instead of temporary tables.
    4. The UNION ALL clause responds faster than UNION.

    What feature of Sybase makes it best suited for investment banks?

    By providing separate products for datawarehousing and other database-linked functions, Sybase has avoided the bloat of its larger competitors, and it can offer better performance and lower total cost of ownership in transactional environments.

    What could most likely cause a SQL query to run as slow as 5 minutes?

    Most probably, a sudden surge in the volume of data in a particular table could slow down the output of a SQL query. So collect the required stats for the target table. Also, monitor any change at the DB level or within the underlying object level.
