Search Results

Search found 8705 results on 349 pages for 'perl scripts'.


  • yum error when installing memcached

    - by Jack
    Hi, I'm trying to install memcached with "yum install memcached" and I'm getting all these errors, which I have no idea how to solve:

    Setting up Install Process
    Resolving Dependencies
    --> Running transaction check
    ---> Package memcached.x86_64 0:1.4.5-1.el5.rf set to be updated
    --> Processing Dependency: perl(AnyEvent) for package: memcached
    --> Processing Dependency: perl(AnyEvent::Socket) for package: memcached
    --> Processing Dependency: perl(AnyEvent::Handle) for package: memcached
    --> Processing Dependency: perl(YAML) for package: memcached
    --> Processing Dependency: perl(Term::ReadKey) for package: memcached
    --> Processing Dependency: libevent-1.1a.so.1()(64bit) for package: memcached
    --> Running transaction check
    ---> Package compat-libevent-11a.x86_64 0:3.2.1-1.el5.rf set to be updated
    ---> Package memcached.x86_64 0:1.4.5-1.el5.rf set to be updated
    --> Processing Dependency: perl(AnyEvent) for package: memcached
    --> Processing Dependency: perl(AnyEvent::Socket) for package: memcached
    --> Processing Dependency: perl(AnyEvent::Handle) for package: memcached
    --> Processing Dependency: perl(YAML) for package: memcached
    --> Processing Dependency: perl(Term::ReadKey) for package: memcached
    --> Finished Dependency Resolution
    memcached-1.4.5-1.el5.rf.x86_64 from rpmforge has depsolving problems
    --> Missing Dependency: perl(AnyEvent::Socket) is needed by package memcached-1.4.5-1.el5.rf.x86_64 (rpmforge)
    memcached-1.4.5-1.el5.rf.x86_64 from rpmforge has depsolving problems
    --> Missing Dependency: perl(AnyEvent) is needed by package memcached-1.4.5-1.el5.rf.x86_64 (rpmforge)
    memcached-1.4.5-1.el5.rf.x86_64 from rpmforge has depsolving problems
    --> Missing Dependency: perl(AnyEvent::Handle) is needed by package memcached-1.4.5-1.el5.rf.x86_64 (rpmforge)
    memcached-1.4.5-1.el5.rf.x86_64 from rpmforge has depsolving problems
    --> Missing Dependency: perl(YAML) is needed by package memcached-1.4.5-1.el5.rf.x86_64 (rpmforge)
    memcached-1.4.5-1.el5.rf.x86_64 from rpmforge has depsolving problems
    --> Missing Dependency: perl(Term::ReadKey) is needed by package memcached-1.4.5-1.el5.rf.x86_64 (rpmforge)
    Packages skipped because of dependency problems:
    compat-libevent-11a-3.2.1-1.el5.rf.x86_64 from rpmforge
    memcached-1.4.5-1.el5.rf.x86_64 from rpmforge

    The perl modules that it's complaining about are already installed. Any ideas?

    Read the article

  • Server setup scripts, patches and migrations

    - by Ben Swinburne
    I have written some scripts which I use to configure various servers in a uniform way. Each time I deploy a server I run the relevant scripts so that I know they're all configured the same. I then have some patch scripts (changes to the originals) which I can run to ensure that modifications to the original setup are applied on each server. For example:

    disable.sh - Disables SELinux etc. to ensure the other scripts all run correctly
    general.sh - Jailkit, AV, repos, RKHunter, security tweaks, uninstalling unused bits, etc.
    web.sh - Installs and configures Apache2
    001_update_nr_licence_key.sh - Updates a licence key for a piece of software which has changed since its install in general.sh

    I can run the first three without a problem, but when it comes to running patches I am a bit stuck. Is there a sensible way of doing this with some software? My current thought is to write the role of the server (web or db, for example) to a log file and note the name of each patch which has run. The script could then iterate through a folder to find all patches for that role which it has not yet run and execute them. This seems a bit long-winded, however. Could someone advise me as to the best way I can keep my servers uniform?

    Read the article

  • Have a set of cgi scripts shared by multiple domains

    - by rpat
    Goal: Have multiple domains share a set of cgi (perl) scripts. Environment: Apache 2.0 on a dedicated CentOS server (Apache configuration files generated by cPanel). I have dozens of domains on the dedicated server. The domains are set up by cPanel under the VirtualHost section. I have almost no knowledge of Apache; most of what I do is taken care of by cPanel. I would like to put a set of scripts under one directory (perhaps under / or /opt) and, for each of the domains, create a symbolic link in the individual cgi-bin pointing to this common directory. This way I am hoping to avoid having to keep a copy of the scripts for every domain. Since the Apache config files are generated by cPanel, I would not like to make changes to those manually. Besides, I could mess things up. I see that cPanel recommends the use of include files rather than changing httpd.conf. Perhaps I need to enable the following of symbolic links in the cgi-bin directory and allow the web server user to execute scripts not owned by it. Maybe I am making things more complicated than they are. I would be glad to use any other means to achieve my goal. Thanks in advance for your help. (I asked this on stackoverflow and someone suggested that I could ask it on serverfault.)

    Read the article

  • Using screen to monitor non-interactive scripts (or some other solution)

    - by Michael
    I have some autonomous scripts that run commands on remote machines over ssh. These scripts rely on getting stdout, stderr, and the return code of each command run. I want to be able to monitor the progress of the scripts on each target machine so that I can see if something has hung and possibly intervene if necessary. My initial idea was to have the scripts run commands in a screen session, so that the person monitoring could simply attach to the session with screen -x. However, it was hard to do that from a script since screen is an interactive program. I can send a command to the screen session with screen -S session -X stuff "command^M", but then I don't get the output and return code that I need back. My second idea was to put script /path/to/log in ~/.bash_profile and log the entire session to a file. Then the monitoring person could simply tail the log file. However, this doesn't provide the interactivity that I was looking for. Any ideas on how to solve this problem?

    Read the article

  • Scripting Language Sessions at Oracle OpenWorld and MySQL Connect, 2012

    - by cj
    This post highlights some great scripting language sessions coming up at the Oracle OpenWorld and MySQL Connect conferences. These events are happening in San Francisco from the end of September. You can search for other interesting conference sessions in the Content Catalog. Also check out what is happening at JavaOne in that event's Content Catalog (I haven't included sessions from it in this post). To find the timeslots and locations of each session, click their respective links and check the "Session Schedule" box on the top right.

    GEN8431 - General Session: What's New in Oracle Database Application Development
    This general session takes a look at what's been new in the last year in Oracle Database application development tools using the latest generation of database technology. Topics range from Oracle SQL Developer and Oracle Application Express to Java and PHP. (Thomas Kyte - Architect, Oracle)

    BOF9858 - Meet the Developers of Database Access Services (OCI, ODBC, DRCP, PHP, Python)
    This session is your opportunity to meet in person the Oracle developers who have built Oracle Database access tools and products such as the Oracle Call Interface (OCI), Oracle C++ Call Interface (OCCI), and Open Database Connectivity (ODBC) drivers; Transparent Application Failover (TAF); Oracle Database Instant Client; Database Resident Connection Pool (DRCP); Oracle Net Services; and so on. The team also works with those who develop the PHP, Ruby, Python, and Perl adapters for Oracle Database. Come discuss with them the features you like, your pains, and new product enhancements in the latest database technology.

    CON8506 - Syndication and Consolidation: Oracle Database Driver for MySQL Applications
    This technical session presents a new Oracle Database driver that enables you to run MySQL applications (written in PHP, Perl, C, C++, and so on) against Oracle Database with almost no code change. Use cases for such a driver include application syndication such as interoperability across a relational database management system, application migration, and database consolidation. In addition, the session covers enhancements in database technology that enable and simplify the migration of third-party databases and applications to and consolidation with Oracle Database. Attend this session to learn more and see a live demo. (Srinath Krishnaswamy - Director, Software Development, Oracle. Kuassi Mensah - Director, Product Management, Oracle. Mohammad Lari - Principal Technical Staff, Oracle)

    CON9167 - Current State of PHP and MySQL
    Together, PHP and MySQL power large parts of the Web. The developers of both technologies continue to enhance their software to ensure that developers can be satisfied despite all their changing and growing needs. This session presents an overview of changes in PHP 5.4, which was released earlier this year, and shows you various new MySQL-related features available for PHP, from transparent client-side caching to direct support for scaling and high-availability needs. (Johannes Schlüter - Software Developer, Oracle)

    CON8983 - Sharding with PHP and MySQL
    In deploying MySQL, scale-out techniques can be used to scale out reads, but for scaling out writes, other techniques have to be used. To distribute writes over a cluster, it is necessary to shard the database and store the shards on separate servers. This session provides a brief introduction to traditional MySQL scale-out techniques in preparation for a discussion on the different sharding techniques that can be used with MySQL server and how they can be implemented with PHP. You will learn about static and dynamic sharding schemes, their advantages and drawbacks, techniques for locating and moving shards, and techniques for resharding. (Mats Kindahl - Senior Principal Software Developer, Oracle)

    CON9268 - Developing Python Applications with MySQL Utilities and MySQL Connector/Python
    This session discusses MySQL Connector/Python and the MySQL Utilities component of MySQL Workbench and explains how to write MySQL applications in Python. It includes in-depth explanations of the features of MySQL Connector/Python and the MySQL Utilities library, along with example code to illustrate the concepts. Those interested in learning how to expand or build their own utilities and connector features will benefit from the tips and tricks from the experts. This session also provides an opportunity to meet directly with the engineers and provide feedback on your issues and priorities. You can learn what exists today and influence future developments. (Geert Vanderkelen - Software Developer, Oracle)

    BOF9141 - MySQL Utilities and MySQL Connector/Python: Python Developers, Unite!
    Come to this lively discussion of the MySQL Utilities component of MySQL Workbench and MySQL Connector/Python. It includes in-depth explanations of the features and dives into the code for those interested in learning how to expand or build their own utilities and connector features. This is an audience-driven session, so put on your best Python shirt and let's talk about MySQL Utilities and MySQL Connector/Python. (Geert Vanderkelen - Software Developer, Oracle. Charles Bell - Senior Software Developer, Oracle)

    CON3290 - Integrating Oracle Database with a Social Network
    Facebook, Flickr, YouTube, Google Maps: there are many social network sites, each with its own APIs for sharing data. Most developers do not realize that Oracle Database has base tools for communicating with these sites, enabling all manner of information, including multimedia, to be passed back and forth between the sites. This technical presentation goes through the methods in PL/SQL for connecting to, and then sending and retrieving, all types of data between these sites. (Marcelle Kratochvil - CTO, Piction)

    CON3291 - Storing and Tuning Unstructured Data and Multimedia in Oracle Database
    Database administrators need to learn new skills and techniques when the decision is made in their organization to let Oracle Database manage its unstructured data. They will face new scalability challenges. A single row in a table can become larger than a whole database. This presentation covers the techniques a DBA needs for managing the large volume of data in a standard Oracle Database instance. (Marcelle Kratochvil - CTO, Piction)

    CON3292 - Using PHP, Perl, Visual Basic, Ruby, and Python for Multimedia in Oracle Database
    These five programming languages are some of the most popular ones in the marketplace at the moment. This presentation details how you can use them to access and retrieve multimedia from Oracle Database. It covers programming techniques and methods for achieving faster development against Oracle Database. (Marcelle Kratochvil - CTO, Piction)

    UGF5181 - Building Real-World Oracle DBA Tools in Perl
    Perl is not normally associated with building mission-critical applications or DBA tools. Learn why Perl could be a good choice for building your next killer DBA app. This session draws on real-world experience of building DBA tools in Perl, showing the framework and architecture needed to deal with portability, efficiency, and maintainability. Topics include Perl frameworks; which Comprehensive Perl Archive Network (CPAN) modules are good to use; Perl and CPAN module licensing; Perl and Oracle connectivity; compiling and deploying your app; and an example of what is possible with Perl. (Arjen Visser - CEO & CTO, Dbvisit Software Limited)

    CON3153 - Perl: A DBA's and Developer's Best (Forgotten) Friend
    This session reintroduces Perl as a language of choice for many solutions for DBAs and developers. Discover what makes Perl so successful and why it is so versatile in our day-to-day lives. Perl can automate all those manual tasks and is truly platform-independent. Perl may not be in the limelight the way other languages are, but it is a remarkable language, it is still very current with ongoing development, and it has amazing online resources. Learn what makes Perl so great (including CPAN), get an introduction to Perl language syntax, find out what you can use Perl for, hear how Oracle uses Perl, discover the best way to learn Perl, and take away a small Perl project challenge. (Arjen Visser - CEO & CTO, Dbvisit Software Limited)

    CON10332 - Oracle RightNow CX Cloud Service's Connect PHP API: Intro, What's New, and Roadmap
    Connect PHP is a public API that enables developers to build solutions with the Oracle RightNow CX Cloud Service platform. This API is used primarily by developers working within the Oracle RightNow Customer Portal Cloud Service framework who are looking to gain access to data and services hosted by the Oracle RightNow CX Cloud Service platform through a backward-compatible API. Connect for PHP leverages the same data model and services as the Connect Web Services for SOAP API. Come to this session to get an introduction and learn what's new and what's coming up. (Mark Rhoads - Senior Principal Applications Engineer, Oracle. Mark Ericson - Sr. Principal Product Manager, Oracle)

    CON10330 - Oracle RightNow CX Cloud Service APIs and Frameworks Overview
    Oracle RightNow CX Cloud Service APIs are available in the following areas: desktop UI, Web services, customer portal, PHP, and knowledge. These frameworks provide access to Oracle RightNow CX Cloud Service's Connect Common Object Model and custom objects. This session provides a broad overview of capabilities in all these areas. (Mark Ericson - Sr. Principal Product Manager, Oracle)

    Read the article

  • SQL SERVER – Signal Wait Time Introduction with Simple Example – Wait Type – Day 2 of 28

    - by pinaldave
    In this post, let's delve a bit more in depth regarding wait stats. The very first question: when do wait stats occur? Here is the simple answer: when SQL Server is executing any task and, for any reason, it has to wait for resources to execute the task, this wait is recorded by SQL Server along with the reason for the delay. Later on we can analyze these wait stats to understand why the task was delayed, and maybe we can eliminate the wait for SQL Server. It is not always possible to remove a wait type 100%, but there are a few suggestions that can help. Before we continue learning about wait types and wait stats, we need to understand three important milestones of the query life cycle:

    Running - a query which is being executed on a CPU is called a running query. This query is responsible for CPU time.
    Runnable - a query which is ready to execute and is waiting for its turn to run is called a runnable query. This query is responsible for Signal Wait time. (In other words, the query is ready to run but the CPU is servicing another query.)
    Suspended - a query which is waiting for any reason (to know the reason, we are learning wait stats) to be converted to runnable is a suspended query. This query is responsible for wait time. (In other words, this is the time we are trying to reduce.)

    In simple words, query execution time is the sum of the query's Executing CPU Time (Running) + Query Wait Time (Suspended) + Query Signal Wait Time (Runnable). Again, a query may pass through all these states multiple times. Let us try to understand the whole thing with a simple analogy of a taxi and a passenger. Two friends, Tom and Danny, go to the mall together. When they leave the mall, they decide to take a taxi. Tom and Danny both stand in the line waiting for their turn to get into a taxi. This is the Signal Wait Time: they are ready to get into a taxi, but the taxis are currently serving other customers and they have to wait for their turn. In other words, they are in a runnable state. Now, when it is their turn to get into the taxi, the driver informs them that he does not take credit cards and only cash is accepted. Since neither Tom nor Danny has enough cash, they both cannot get into the vehicle. Tom waits outside in the queue and Danny goes to an ATM to fetch the cash. During this time the taxi cannot wait; it has to let other passengers get in. As Tom and Danny are both outside in the queue, this is the Query Wait Time and they are in the suspended state. They cannot do anything till they get the cash. Once Danny gets the cash, they are both standing in the line again, creating one more Signal Wait Time. This time, when their turn comes, they can pay the taxi driver in cash and reach their destination. The time taken for the taxi to get from the mall to the destination is the running time (CPU time), and the taxi is running. I hope this analogy makes the wait stats a bit clearer. You can check the signal wait stats using the following query from Glenn Berry:

    -- Signal Waits for instance
    SELECT CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms) AS NUMERIC(20,2)) AS [%signal (cpu) waits],
    CAST(100.0 * SUM(wait_time_ms - signal_wait_time_ms) / SUM(wait_time_ms) AS NUMERIC(20,2)) AS [%resource waits]
    FROM sys.dm_os_wait_stats
    OPTION (RECOMPILE);

    Higher signal wait stats are not good for the system; a very high value indicates CPU pressure. In my experience, when systems are running smoothly and without any glitch, the signal wait stat is lower than 20%. Again, this number can be debated (it is from my experience and is not documented anywhere). In other words, lower is better and higher is not good for the system. In future articles we will discuss the various wait types and wait stats and their resolution in detail. Read all the posts in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • SQL SERVER – Simple Example of Snapshot Isolation – Reduce the Blocking Transactions

    - by pinaldave
    To learn any technology and move to a more advanced level, it is very important to understand the fundamentals of the subject first. Today we will be talking about something that was introduced quite a long time ago but has not been properly explored when it comes to isolation levels. Snapshot Isolation was introduced in SQL Server 2005. However, the reality is that there are still many software shops using SQL Server 2000 which therefore cannot use Snapshot Isolation. Many software shops have upgraded to a later version of SQL Server, but their developers have not spent enough time bringing themselves up to date with the latest technology. "It works!" is a very common answer from many when asked about utilizing new technology instead of backward-compatible commands. In one recent consulting project I had the same experience: the developers had "heard about it" but had no idea about snapshot isolation. They thought it was the same as Snapshot Replication, which is plain wrong. This is the demo I created for them, included here. In Snapshot Isolation, the updated row versions for each transaction are maintained in TempDB. Once a transaction has begun, it ignores all the newer rows inserted or updated in the table. Let us examine this example, which shows a simple demonstration. This transaction works on an optimistic concurrency model: since a reading transaction does not block a writing transaction, and a writing transaction does not block a reading transaction, blocking is reduced. First, enable the database to work with Snapshot Isolation. Additionally, check the existing values in the table HumanResources.Shift.

    ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON
    GO
    SELECT ModifiedDate FROM HumanResources.Shift
    GO

    Now we will need two different sessions to prove this example. First session: set the transaction isolation level to snapshot and begin the transaction. Update the column "ModifiedDate" to today's date.

    -- Session 1
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT
    BEGIN TRAN
    UPDATE HumanResources.Shift
    SET ModifiedDate = GETDATE()
    GO

    Please note that we have not yet committed the transaction. Now open the second session and run the following SELECT statement, then check the values in the table. Pay attention to setting the isolation level of the second session to "Snapshot" as well, again starting the transaction with BEGIN TRAN.

    -- Session 2
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT
    BEGIN TRAN
    SELECT ModifiedDate FROM HumanResources.Shift
    GO

    You will notice that the values in the table are still the original values; they have not been modified yet. Once again, go back to session 1 and commit the transaction.

    -- Session 1
    COMMIT

    After that, go back to session 2 and see the values in the table.

    -- Session 2
    SELECT ModifiedDate FROM HumanResources.Shift
    GO

    You will notice that the values are still not changed; they are the same old values that were there right at the beginning of the session. Now let us commit the transaction in session 2. Once committed, run the same SELECT statement once more and see what the result is.

    -- Session 2
    COMMIT
    SELECT ModifiedDate FROM HumanResources.Shift
    GO

    You will notice that it now reflects the new updated value. I hope this example is clear enough to give you a good idea of how the Snapshot Isolation level works. There is much more to write about a further level, READ_COMMITTED_SNAPSHOT, which we will be discussing in another post soon. If you use this transaction isolation level in your production database, I would appreciate your comments about its performance on your servers. I have included the complete script used in this example for your quick reference.

    ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON
    GO
    SELECT ModifiedDate FROM HumanResources.Shift
    GO
    -- Session 1
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT
    BEGIN TRAN
    UPDATE HumanResources.Shift
    SET ModifiedDate = GETDATE()
    GO
    -- Session 2
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT
    BEGIN TRAN
    SELECT ModifiedDate FROM HumanResources.Shift
    GO
    -- Session 1
    COMMIT
    -- Session 2
    SELECT ModifiedDate FROM HumanResources.Shift
    GO
    -- Session 2
    COMMIT
    SELECT ModifiedDate FROM HumanResources.Shift
    GO

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Transaction Isolation
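    A quick pointer on the related level mentioned above: the following is a minimal sketch of how READ_COMMITTED_SNAPSHOT is switched on, reusing the same AdventureWorks sample database from this demo. One assumption worth verifying in your environment: the ALTER statement needs exclusive access to the database, so it will be blocked while other connections are active.

    -- Minimal sketch: enable row-versioned READ COMMITTED on the sample database
    -- (assumes no other active connections, otherwise the ALTER waits)
    ALTER DATABASE AdventureWorks SET READ_COMMITTED_SNAPSHOT ON;
    GO
    -- With this on, plain READ COMMITTED readers see the last committed row version
    -- instead of blocking behind writers.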

    Read the article

  • SQL SERVER – Spatial Database Queries – What About BLOB – T-SQL Tuesday #006

    - by pinaldave
    Michael Coles is one of the most interesting book authors I have ever met. He has a flair for writing about complex stuff in simple language; there are very few people like that. I really enjoyed reading his recent book, Expert SQL Server 2008 Encryption, and I strongly suggest taking a look at it. This blog is written in response to T-SQL Tuesday #006: "What About BLOB?" by Michael Coles. Spatial Database is my favorite subject. Since I gave my TechEd India 2010 presentation, I have enjoyed this subject a lot. Before I continue this blog post, there are a few other blog posts I suggest you read. To help you build the environment and run the queries, I am listing them in this single blog post:

    SQL SERVER – What is Spatial Database? – Developing with SQL Server Spatial and Deep Dive into Spatial Indexing: this post explains the basics of Spatial Database and also provides a good introduction to the indexing concept.
    SQL SERVER – World Shapefile Download and Upload to Database – Spatial Database: this post shows you how to load the shapefile into the database.
    SQL SERVER – Spatial Database Definition and Research Documents: this post links to the white paper about Spatial Database written by Microsoft experts.
    SQL SERVER – Introduction to Spatial Coordinate Systems: Flat Maps for a Round Planet: this post links to the white paper explaining coordinate systems, as written by Microsoft experts.

    After reading the blog posts listed above, I am very confident that you are ready to run the following script. Once you create a database using the World Shapefile, as mentioned in the second link above, you can display the image of India just like the following. Please note that this is not an accurate political map; the boundary of this map has many errors and it is just a representation. You can run the following query to generate the map of India from the spatial database which you created by following the instructions here.

    USE Spatial
    GO
    -- India Map
    SELECT [CountryName], [BorderAsGeometry], [Border]
    FROM [Spatial].[dbo].[Countries]
    WHERE Countryname = 'India'
    GO

    Now, let us find the longitude and latitude of the two major IT cities of India, Hyderabad and Bangalore. I find their values as follows: the longitude-latitude for Bangalore is 77.5833300000 13.0000000000; for Hyderabad, it is 78.4675900000 17.4531200000. Now let us try to put these values on the India map and see their locations.

    -- Bangalore
    DECLARE @GeoLocation GEOGRAPHY
    SET @GeoLocation = GEOGRAPHY::STPointFromText('POINT(77.5833300000 13.0000000000)',4326).STBuffer(20000);
    -- Hyderabad
    DECLARE @GeoLocation1 GEOGRAPHY
    SET @GeoLocation1 = GEOGRAPHY::STPointFromText('POINT(78.4675900000 17.4531200000)',4326).STBuffer(20000);
    -- Bangalore and Hyderabad on Map of India
    SELECT name, [GeoLocation]
    FROM [IndiaGeoNames] I
    WHERE I.[GeoLocation].STDistance(@GeoLocation) <= 0
    UNION ALL
    SELECT name, [GeoLocation]
    FROM [IndiaGeoNames] I
    WHERE I.[GeoLocation].STDistance(@GeoLocation1) <= 0
    UNION ALL
    SELECT '', [Border]
    FROM [Spatial].[dbo].[Countries]
    WHERE Countryname = 'India'
    GO

    Now let us quickly draw a straight line between them.

    DECLARE @GeoLocation GEOGRAPHY
    SET @GeoLocation = GEOGRAPHY::STPointFromText('POINT(78.4675900000 17.4531200000)',4326).STBuffer(10000);
    DECLARE @GeoLocation1 GEOGRAPHY
    SET @GeoLocation1 = GEOGRAPHY::STPointFromText('POINT(77.5833300000 13.0000000000)',4326).STBuffer(10000);
    DECLARE @GeoLocation2 GEOGRAPHY
    SET @GeoLocation2 = GEOGRAPHY::STGeomFromText('LINESTRING(78.4675900000 17.4531200000, 77.5833300000 13.0000000000)',4326)
    SELECT name, [GeoLocation]
    FROM [IndiaGeoNames] I
    WHERE I.[GeoLocation].STDistance(@GeoLocation) <= 0
    UNION ALL
    SELECT name, [GeoLocation]
    FROM [IndiaGeoNames] I1
    WHERE I1.[GeoLocation].STDistance(@GeoLocation1) <= 0
    UNION ALL
    SELECT '' name, @GeoLocation2
    UNION ALL
    SELECT '', [Border]
    FROM [Spatial].[dbo].[Countries]
    WHERE Countryname = 'India'
    GO

    Let us use the distance function of the spatial database and find the straight-line distance between these two cities.

    -- Distance Between Hyderabad and Bangalore
    DECLARE @GeoLocation GEOGRAPHY
    SET @GeoLocation = GEOGRAPHY::STPointFromText('POINT(78.4675900000 17.4531200000)',4326)
    DECLARE @GeoLocation1 GEOGRAPHY
    SET @GeoLocation1 = GEOGRAPHY::STPointFromText('POINT(77.5833300000 13.0000000000)',4326)
    SELECT @GeoLocation.STDistance(@GeoLocation1)/1000 'KM';
    GO

    The result of the above query is displayed in the following image. As per SQL Server, the distance between these two cities is 501 km, but as far as I know, the distance between them is around 562 km by road. However, please note that roads are not straight; they have lots of turns, whereas this is a straight-line distance. What would be more accurate is the distance between these two cities by air travel. When we look at the air travel distance between Bangalore and Hyderabad, the total distance covered is 495 km, which is very close to SQL Server's estimate of 501 km. Bravo! SQL Server has accurately provided the distance between the two cities. SQL Server Spatial Database can be very useful simply because it is very easy to use, as demonstrated above. I appreciate your comments, so let me know your thoughts and opinions about this. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Spatial Database

    Read the article

  • SQL SERVER – Solution to Puzzle – Simulate LEAD() and LAG() without Using SQL Server 2012 Analytic Function

    - by pinaldave
    Earlier I wrote a series on the SQL Server Analytic Functions of SQL Server 2012. To maximize the learning and keep it fun, the series included a few puzzles. One of the puzzles was simulating LEAD() and LAG() without using the SQL Server 2012 analytic functions. Please read the puzzle here first before reading the solution: Write T-SQL Self Join Without Using LEAD and LAG. When I originally wrote the puzzle I made a small blunder and the question was a bit confusing, which I corrected later on; I also wrote a follow-up blog post over here where I describe the giveaway. Quick recap: generate the following results without using the SQL Server 2012 analytic functions. I received many valid answers. Some answers were similar to others and some were very innovative. Some answers were very adaptive and some did not work when I changed the WHERE condition. After selecting all the valid answers, I put them in a table, ran a RANDOM function on it, and selected the winners. Here are the valid answers.

    No Joins and No Analytic Functions
    Excellent solution by Geri Reshef – winner of SQL Server Interview Questions and Answers (India | USA):

    WITH T1 AS
    (SELECT Row_Number() OVER(ORDER BY SalesOrderDetailID) N,
    s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
    FROM Sales.SalesOrderDetail s
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663))
    SELECT SalesOrderID, SalesOrderDetailID, OrderQty,
    CASE WHEN N%2=1 THEN MAX(CASE WHEN N%2=0 THEN SalesOrderDetailID END) OVER (Partition BY (N+1)/2)
    ELSE MAX(CASE WHEN N%2=1 THEN SalesOrderDetailID END) OVER (Partition BY N/2) END LeadVal,
    CASE WHEN N%2=1 THEN MAX(CASE WHEN N%2=0 THEN SalesOrderDetailID END) OVER (Partition BY N/2)
    ELSE MAX(CASE WHEN N%2=1 THEN SalesOrderDetailID END) OVER (Partition BY (N+1)/2) END LagVal
    FROM T1
    ORDER BY SalesOrderID, SalesOrderDetailID, OrderQty;
    GO

    No Analytic Function and Early Bird
    Excellent solution by DHall – winner of a Pluralsight 30-day subscription:

    -- a query to emulate LEAD() and LAG()
    ;WITH s AS
    (
    SELECT 1 AS ldOffset, -- equiv to 2nd param of LEAD
    1 AS lgOffset, -- equiv to 2nd param of LAG
    NULL AS ldDefVal, -- equiv to 3rd param of LEAD
    NULL AS lgDefVal, -- equiv to 3rd param of LAG
    ROW_NUMBER() OVER (ORDER BY SalesOrderDetailID) AS row,
    SalesOrderID, SalesOrderDetailID, OrderQty
    FROM Sales.SalesOrderDetail
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    )
    SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
    ISNULL( sLd.SalesOrderDetailID, s.ldDefVal) AS LeadValue,
    ISNULL( sLg.SalesOrderDetailID, s.lgDefVal) AS LagValue
    FROM s
    LEFT OUTER JOIN s AS sLd ON s.row = sLd.row - s.ldOffset
    LEFT OUTER JOIN s AS sLg ON s.row = sLg.row + s.lgOffset
    ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty

    No Analytic Function and Partition By
    Excellent solution by DHall – winner of a Pluralsight 30-day subscription:

    /* a query to emulate LEAD() and LAG() */
    ;WITH s AS
    (
    SELECT 1 AS LeadOffset, /* equiv to 2nd param of LEAD */
    1 AS LagOffset, /* equiv to 2nd param of LAG */
    NULL AS LeadDefVal, /* equiv to 3rd param of LEAD */
    NULL AS LagDefVal, /* equiv to 3rd param of LAG */
    /* Try changing the values of the 4 integer values above to see their effect on the results */
    /* The values given above of 0, 0, null and null behave the same as the default 2nd and 3rd parameters to LEAD() and LAG() */
    ROW_NUMBER() OVER (ORDER BY SalesOrderDetailID) AS row,
    SalesOrderID, SalesOrderDetailID, OrderQty
    FROM Sales.SalesOrderDetail
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    )
    SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
    ISNULL( sLead.SalesOrderDetailID, s.LeadDefVal) AS LeadValue,
    ISNULL( sLag.SalesOrderDetailID, s.LagDefVal) AS LagValue
    FROM s
    LEFT OUTER JOIN s AS sLead ON s.row = sLead.row - s.LeadOffset
    /* Try commenting out this next line when LeadOffset != 0 */
    AND s.SalesOrderID = sLead.SalesOrderID
    /* The additional join criteria on SalesOrderID above is equivalent to PARTITION BY SalesOrderID in the OVER clause of the LEAD() function */
    LEFT OUTER JOIN s AS sLag ON s.row = sLag.row + s.LagOffset
    /* Try commenting out this next line when LagOffset != 0 */
    AND s.SalesOrderID = sLag.SalesOrderID
    /* The additional join criteria on SalesOrderID above is equivalent to PARTITION BY SalesOrderID in the OVER clause of the LAG() function */
    ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty

    No Analytic Function and CTE Usage
    Excellent solution by Pravin Patel – winner of SQL Server Interview Questions and Answers (India | USA):

    -- CTE based solution
    ;WITH cteMain AS
    (
    SELECT SalesOrderID, SalesOrderDetailID, OrderQty,
    ROW_NUMBER() OVER (ORDER BY SalesOrderDetailID) AS sn
    FROM Sales.SalesOrderDetail
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    )
    SELECT m.SalesOrderID, m.SalesOrderDetailID, m.OrderQty,
    sLead.SalesOrderDetailID AS leadvalue,
    sLeg.SalesOrderDetailID AS leagvalue
    FROM cteMain AS m
    LEFT OUTER JOIN cteMain AS sLead ON sLead.sn = m.sn+1
    LEFT OUTER JOIN cteMain AS sLeg ON sLeg.sn = m.sn-1
    ORDER BY m.SalesOrderID, m.SalesOrderDetailID, m.OrderQty

    No Analytic Function and Co-Related Subquery Usage
    Excellent solution by Pravin Patel – winner of SQL Server Interview Questions and Answers (India | USA):

    -- Co-related subquery
    SELECT m.SalesOrderID, m.SalesOrderDetailID, m.OrderQty,
    (
    SELECT MIN(SalesOrderDetailID)
    FROM Sales.SalesOrderDetail AS l
    WHERE l.SalesOrderID IN (43670, 43669, 43667, 43663)
    AND l.SalesOrderID >= m.SalesOrderID
    AND l.SalesOrderDetailID > m.SalesOrderDetailID
    ) AS lead,
    (
    SELECT MAX(SalesOrderDetailID)
    FROM Sales.SalesOrderDetail AS l
    WHERE l.SalesOrderID IN (43670, 43669, 43667, 43663)
    AND l.SalesOrderID <= m.SalesOrderID
    AND l.SalesOrderDetailID < m.SalesOrderDetailID
    ) AS leag
    FROM Sales.SalesOrderDetail AS m
    WHERE m.SalesOrderID IN (43670, 43669, 43667, 43663)
    ORDER BY m.SalesOrderID, m.SalesOrderDetailID, m.OrderQty

    This was one of the most interesting puzzles on this blog. The giveaway winners receive the following: Geri Reshef and Pravin Patel get SQL Server Interview Questions and Answers (India | USA); DHall gets a Pluralsight 30-day subscription. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Function, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
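    For side-by-side reference, here is a minimal sketch of what the workarounds above emulate, written with the native SQL Server 2012 analytic functions against the same AdventureWorks rows used in the puzzle:

    -- Native SQL Server 2012 equivalent of the emulations above
    SELECT SalesOrderID, SalesOrderDetailID, OrderQty,
    LEAD(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) AS LeadValue,
    LAG(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) AS LagValue
    FROM Sales.SalesOrderDetail
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    ORDER BY SalesOrderID, SalesOrderDetailID, OrderQty;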

    Read the article

  • SQL SERVER – PAGEIOLATCH_DT, PAGEIOLATCH_EX, PAGEIOLATCH_KP, PAGEIOLATCH_SH, PAGEIOLATCH_UP – Wait Type – Day 9 of 28

    - by pinaldave
    It is very easy to say that you should replace your hardware when it is not up to the mark. In reality, it is very difficult to implement. It is really hard to convince an infrastructure team to change any hardware on the grounds that it is not performing at its best. I had a nightmare related to this issue while dealing with an infrastructure team, when I suggested that they replace their faulty hardware: initially they would not accept that the fault lay with their hardware. It is really easy to say "Trust me, I am correct", but it is equally important to put some logical reasoning behind that statement. PAGEIOLATCH_XX is the kind of wait stat we would like to blame directly on the underlying subsystem. Of course, most of the time that is correct: the underlying subsystem usually is the problem.

    From Book On-Line:
    PAGEIOLATCH_DT Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Destroy mode. Long waits may indicate problems with the disk subsystem.
    PAGEIOLATCH_EX Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Exclusive mode. Long waits may indicate problems with the disk subsystem.
    PAGEIOLATCH_KP Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Keep mode. Long waits may indicate problems with the disk subsystem.
    PAGEIOLATCH_SH Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Shared mode. Long waits may indicate problems with the disk subsystem.
    PAGEIOLATCH_UP Occurs when a task is waiting on a latch for a buffer that is in an I/O request. The latch request is in Update mode. Long waits may indicate problems with the disk subsystem.

    PAGEIOLATCH_XX explanation: simply put, this wait type occurs when a task is waiting for data to move from disk into the buffer cache.

    Reducing PAGEIOLATCH_XX waits: just like any other wait type, this is a very challenging and interesting subject to resolve. Here are a few things you can experiment with:

    Improve your IO subsystem speed (read the first paragraph of this article if you have not; I repeat that a step like this is far easier to say than to actually implement).
    This wait type can also result from memory pressure or other memory issues. Putting aside a faulty IO subsystem, this wait type warrants proper analysis of the memory counters: if for any reason the memory is not optimal and is unable to receive the IO data, that situation can produce this kind of wait.
    Proper placement of files is very important. Check the file system for proper placement of files: LDF and MDF on separate drives, TempDB on a separate drive, hot-spot tables in a separate filegroup (and on a separate disk), etc.
    Check the file statistics and see if there are high IO read and IO write stalls: SQL SERVER – Get File Statistics Using fn_virtualfilestats (see the sketch at the end of this entry).
    It is very possible that there are no proper indexes on the system and there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate covering index instead of the clustered index, it can significantly reduce CPU, memory and IO usage (considering the covering index has far fewer columns than the clustered table, and all the other "it depends" conditions). You can refer to the two article links below, previously written by me, that talk about how to optimize indexes: Create Missing Indexes; Drop Unused Indexes.
    Updating statistics can help the Query Optimizer render an optimal plan, whether directly or indirectly. I have seen that updating statistics with a full scan (again, if your database is huge and you cannot do this, never mind!) can provide optimal information to the SQL Server optimizer, leading to an efficient plan.

    Checking memory-related Perfmon counters:
    SQLServer: Memory Manager\Memory Grants Pending (a consistent value higher than 0-2 is a concern)
    SQLServer: Memory Manager\Memory Grants Outstanding (consistent higher value; benchmark)
    SQLServer: Buffer Manager\Buffer Cache Hit Ratio (higher is better; greater than 90% for a smoothly running system)
    SQLServer: Buffer Manager\Page Life Expectancy (a consistent value lower than 300 seconds is a concern)
    Memory: Available MBytes (information only)
    Memory: Page Faults/sec (benchmark only)
    Memory: Pages/sec (benchmark only)

    Checking disk-related Perfmon counters:
    Average Disk sec/Read (a consistent value higher than 4-8 milliseconds is not good)
    Average Disk sec/Write (a consistent value higher than 4-8 milliseconds is not good)
    Average Disk Read/Write Queue Length (a consistent value higher than the benchmark is not good)

    Note: The information presented here is from my experience, and there is no way I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussion of wait stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
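    As a companion to the file-statistics suggestion in the list above, here is a minimal sketch for checking read and write stalls per database file. It uses sys.dm_io_virtual_file_stats, the DMV counterpart of the fn_virtualfilestats function mentioned in the post; the output is raw stall totals to compare against your own baseline rather than any fixed threshold.

    -- Minimal sketch: per-file IO stalls across all databases (SQL Server 2005 and later)
    SELECT DB_NAME(vfs.database_id) AS database_name,
    vfs.file_id,
    vfs.num_of_reads,
    vfs.io_stall_read_ms, -- total time reads waited on this file
    vfs.num_of_writes,
    vfs.io_stall_write_ms -- total time writes waited on this file
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    ORDER BY (vfs.io_stall_read_ms + vfs.io_stall_write_ms) DESC;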

    Read the article

  • SQL SERVER – Solution – Puzzle – Statistics are not Updated but are Created Once

    - by pinaldave
    Earlier I asked a puzzle about why statistics are not updated. Read the complete details over here: Statistics are not Updated but are Created Once. In the question I demonstrated that statistics which should have been updated after lots of inserts into the table were not updated. (Read the details in SQL SERVER – When are Statistics Updated – What triggers Statistics to Update.) In the example I created the following situation:

    Create a table
    Insert 1,000 records
    Check the statistics
    Insert 10 times more records (10,000)
    Check the statistics: they will NOT be updated
    Auto Update Statistics and Auto Create Statistics for the database are TRUE

    I then asked two things about the example: 1) Why is this happening? 2) How can this issue be fixed? There are many answers; here is the fix that resolved the issue for me. NOTE: There are multiple answers to this problem and I will do my best to list them all. Solution: create a nonclustered index on column City. Here is the working example, with added explanation at the end.

    -- Execution Plans Difference
    -- Estimated Execution Plan Vs Actual Execution Plan
    -- Create Sample Database
    CREATE DATABASE SampleDB
    GO
    USE SampleDB
    GO
    -- Create Table
    CREATE TABLE ExecTable (ID INT,
    FirstName VARCHAR(100),
    LastName VARCHAR(100),
    City VARCHAR(100))
    GO
    CREATE NONCLUSTERED INDEX IX_ExecTable1 ON ExecTable (City);
    GO
    -- Insert One Thousand Records
    -- INSERT 1
    INSERT INTO ExecTable (ID,FirstName,LastName,City)
    SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
    'Bob',
    CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith'
    ELSE 'Brown' END,
    CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York'
    WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino'
    WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles'
    WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega'
    WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego'
    WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas'
    ELSE 'Houston' END
    FROM sys.all_objects a
    CROSS JOIN sys.all_objects b
    GO
    -- Display statistics of the table
    sp_helpstats N'ExecTable', 'ALL'
    GO
    -- Select Statement
    SELECT FirstName, LastName, City
    FROM ExecTable
    WHERE City = 'New York'
    GO
    -- Display statistics of the table
    sp_helpstats N'ExecTable', 'ALL'
    GO
    -- Replace your Statistics over here
    DBCC SHOW_STATISTICS('ExecTable', IX_ExecTable1);
    GO
    --------------------------------------------------------------
    -- Round 2
    -- Insert One Thousand Records
    -- INSERT 2
    INSERT INTO ExecTable (ID,FirstName,LastName,City)
    SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
    'Bob',
    CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith'
    ELSE 'Brown' END,
    CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York'
    WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino'
    WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles'
    WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega'
    WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego'
    WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas'
    ELSE 'Houston' END
    FROM sys.all_objects a
    CROSS JOIN sys.all_objects b
    GO
    -- Select Statement
    SELECT FirstName, LastName, City
    FROM ExecTable
    WHERE City = 'New York'
    GO
    -- Display statistics of the table
    sp_helpstats N'ExecTable', 'ALL'
    GO
    -- Replace your Statistics over here
    DBCC SHOW_STATISTICS('ExecTable', IX_ExecTable1);
    GO
    -- Clean up Database
    DROP TABLE ExecTable
    GO

    When I created the nonclustered index on the column City, it also created statistics on the same column, with the same name as the index. When we populate the data in the column, the index is updated, invalidating the execution plan; this leads to the statistics being updated on the next execution of the SELECT. This behavior does not happen on a heap or on a column where the statistics are auto created. If you explicitly update the index, you can often see the statistics updated as well. You can see this happening for sure if you follow the explanation of John Sansom.

    John Sansom's suggestion: That was fun! Although the column statistics are invalidated by the time the second select statement is executed, the query is not compiled/recompiled; instead the existing query plan is reused. It is the "next" compiled query against the column statistics that will see that they are out of date and will then in turn trigger the action of updating statistics. You can see this in action by forcing the second statement to recompile:

    SELECT FirstName, LastName, City
    FROM ExecTable
    WHERE City = 'New York' OPTION(RECOMPILE)
    GO

    Kevin Cross also has another suggestion: I agree with John. It is reusing the execution plan. Aside from OPTION(RECOMPILE), clearing the execution plan cache before the subsequent tests will also work, i.e., run this before round 2:

    -- Clear execution plan cache before next test
    DBCC FREEPROCCACHE WITH NO_INFOMSGS;

    Nice puzzle! Kevin

    As this was a puzzle, John and Kevin both got the correct answer; there was no condition that the answer be part of best practices. I know John, and he is one of the finest DBAs around; his tremendous knowledge has always impressed me. John and Kevin will both agree that clearing the cache with DBCC FREEPROCCACHE and recompiling each query every time is for sure not good advice on a production server. It is a correct answer, but not a best practice. By the way, if you have a better solution or a better suggestion, please advise. I am open to changing my answer and publishing further improvements to this solution. On a very separate note, I like to have a clustered index on my primary key, which I have not mentioned here as it is out of the scope of this puzzle. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Index, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Statistics
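    To tie this to the point above about explicitly updating the index: assuming the demo objects from the script still exist, here is a minimal sketch of refreshing the statistics by hand, either directly or via an index rebuild. Both statements are standard T-SQL; use FULLSCAN only if the table size allows it.

    -- Minimal sketch: explicitly refresh the statistics behind IX_ExecTable1
    UPDATE STATISTICS ExecTable IX_ExecTable1 WITH FULLSCAN;
    GO
    -- Rebuilding the index also updates its statistics with a full scan
    ALTER INDEX IX_ExecTable1 ON ExecTable REBUILD;
    GO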

    Read the article

  • SQL SERVER – PAGELATCH_DT, PAGELATCH_EX, PAGELATCH_KP, PAGELATCH_SH, PAGELATCH_UP – Wait Type – Day 12 of 28

    - by pinaldave
    This is another common wait type. However, I still frequently see people getting confused between the PAGEIOLATCH_X and PAGELATCH_X wait types. Actually, there is a big difference between the two. PAGEIOLATCH is related to IO issues, while PAGELATCH is not related to IO issues but is oftentimes linked to a buffer issue.

Before we delve deeper into this interesting topic, first let us understand what a latch is. Latches are internal SQL Server locks which can be described as very lightweight and short-term synchronization objects. Latches are not primarily there to protect pages being read from disk into memory; a latch is a synchronization object for any in-memory access to any portion of a log or data file. [Updated based on a comment from Paul Randal] The difference between locks and latches is that locks seal all the involved resources throughout the duration of the transaction (and other processes have no access to the object), whereas latches lock the resources only during the time the data is changed. This way, a latch is able to maintain the integrity of the data between the storage engine and the data cache. A latch is a short-lived lock that is put on resources in the buffer cache and on the physical disk when data is moved in either direction. As soon as the data is moved, the latch is released.

Now, let us understand the wait stat types related to latches. From Book On-Line:

PAGELATCH_DT: Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Destroy mode.
PAGELATCH_EX: Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Exclusive mode.
PAGELATCH_KP: Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Keep mode.
PAGELATCH_SH: Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Shared mode.
PAGELATCH_UP: Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Update mode.

PAGELATCH_X Explanation: When there is contention on access to in-memory pages, this wait type shows up. It is quite possible that some of the pages in memory are in very high demand. For SQL Server to access them and put a latch on the pages, it has to wait. Several of these wait types usually show up at the same time. Additionally, this wait is commonly visible when TempDB has high contention as well. If there are indexes that are heavily used, contention can be created as well, leading to this wait type.

Reducing PAGELATCH_X waits: The following counters are useful to understand the status of PAGELATCH:

Average Latch Wait Time (ms): The wait time for latch requests that have to wait.
Latch Waits/sec: The number of latch requests that could not be granted immediately.
Total Latch Wait Time (ms): The total latch wait time for latch requests in the last second.

If there is TempDB contention, I suggest that you read the blog post of Robert Davis right away. He has written an excellent blog post regarding how to find out TempDB contention. The same blog post explains the terms in the allocation of GAM, SGAM and PFS. If there is TempDB contention, Paul Randal explains the optimal settings for TempDB in his misconceptions series. Trace Flag 1118 can be useful, but use it very carefully.

I totally understand that this blog post is not as clear as my other blog posts. If this wait stat is one of your higher wait types, do leave a comment or send me an email and I will get back to you with my solution for your situation. Looking at all the other wait stats and types together can be effective, as this wait type can help point to the proper bottleneck in your system. Read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book On-Line for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com)   Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
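To see how much of your overall wait time these latches account for, the accumulated numbers can be read straight from the wait stats DMV. A minimal sketch (the filter and ordering are mine, not from the original post):

-- Accumulated PAGELATCH waits since the last instance restart
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE N'PAGELATCH%'
ORDER BY wait_time_ms DESC;
GO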

    Read the article

  • SQL SERVER – CXPACKET – Parallelism – Usual Solution – Wait Type – Day 6 of 28

    - by pinaldave
    CXPACKET has to be the most popular of all wait stats. I have commonly seen this wait stat as one of the top 5 wait stats in most systems with more than one CPU.

Books On-Line: Occurs when trying to synchronize the query processor exchange iterator. You may consider lowering the degree of parallelism if contention on this wait type becomes a problem.

CXPACKET Explanation: When a parallel operation is created for a SQL query, there are multiple threads for a single query. Each thread deals with a different set of the data (or rows). Due to some reasons, one or more of the threads lag behind, creating the CXPACKET wait stat. There is an organizer/coordinator thread (thread 0), which waits for all the threads to complete and gathers the results together to present on the client's side. The organizer thread has to wait for all the threads to finish before it can move ahead. The wait by this organizer thread for slow threads to complete is called the CXPACKET wait. Note that not all CXPACKET wait types are bad. You might experience a case when it totally makes sense. There might also be cases when this is unavoidable. If you remove this particular wait type for any query, then that query may run slower because the parallel operations are disabled for the query.

Reducing CXPACKET waits: We cannot discuss reducing the CXPACKET wait without talking about the server workload type.

OLTP: On a pure OLTP system, where the transactions are smaller and queries are not long but usually very quick, set the "Maximum Degree of Parallelism" to 1 (one). This makes sure that the query never goes parallel and does not incur more engine overhead.

-- 'max degree of parallelism' is an advanced option; enable
-- 'show advanced options' first if it is not visible.
EXEC sys.sp_configure N'max degree of parallelism', N'1'
GO
RECONFIGURE WITH OVERRIDE
GO

Data-warehousing / Reporting server: As queries will be running for a long time, it is advised to set the "Maximum Degree of Parallelism" to 0 (zero). This way most of the queries will utilize the parallel processors, and long running queries get a boost in their performance due to multiple processors.

EXEC sys.sp_configure N'max degree of parallelism', N'0'
GO
RECONFIGURE WITH OVERRIDE
GO

Mixed System (OLTP & OLAP): Here is the challenge. The right balance has to be found. I have taken a very simple approach. I set the "Maximum Degree of Parallelism" to 2, which means the query still uses parallelism but only on 2 CPUs. However, I keep the "Cost Threshold for Parallelism" very high. This way, not all queries qualify for parallelism; only the queries with a higher cost go parallel. I have found this to work best for a system that has OLTP queries and also has a reporting server set up. Here, I am setting 'Cost Threshold for Parallelism' to a value of 25 (which is just for illustration); you can choose any value, and you can find it out only by experimenting with the system. In the following script, I am setting the 'Max Degree of Parallelism' to 2, which indicates that a query with a higher cost (here, more than 25) will qualify to run as a parallel query on 2 CPUs. This implies that regardless of the number of CPUs, the query will select any two CPUs to execute itself.

EXEC sys.sp_configure N'cost threshold for parallelism', N'25'
GO
EXEC sys.sp_configure N'max degree of parallelism', N'2'
GO
RECONFIGURE WITH OVERRIDE
GO

Read all the posts in the Wait Types and Queue series. Additionally, a must read is the comment of Jonathan Kehayias. Note: The information presented here is from my experience and in no way do I claim it to be accurate. I suggest you all read Books On-Line for further clarification. All the discussion of Wait Stats here is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
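A quick addendum: before and after changing these options it can help to confirm what the server is actually using. A minimal sketch against the standard catalog view (my addition, not from the original post):

-- value = configured value, value_in_use = currently effective value
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name IN (N'max degree of parallelism', N'cost threshold for parallelism');
GO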

    Read the article

  • SQL SERVER – Fundamentals of Columnstore Index

    - by pinaldave
    There are two kinds of storage in a database: row store and column store. A row store does exactly as the name suggests – it stores rows of data on a page – and a column store stores all the data of a column on the same page. These columns are much easier to search: instead of a query searching all the data in an entire row whether the data is relevant or not, column store queries need only search a much smaller number of columns. This means major increases in search speed and savings in hard drive use. Additionally, columnstore indexes are heavily compressed, which translates to even greater memory savings and faster searches. I am sure this looks very exciting, but it does not mean that you should convert every single index from row store to column store. One has to understand the proper places to use row store or column store indexes. Let us understand in this article how the columnstore type of index is different.

Column store indexes are powered by Microsoft's VertiPaq technology. However, all you really need to know is that this method of storing data as columns on a single page is much faster and more efficient. Creating a column store index is very easy, and you don't have to learn new syntax to create one. You just need to specify the keyword "COLUMNSTORE" and define the index as you normally would. Keep in mind that once you add a columnstore index to a table, though, you cannot delete, insert or update the data – it is READ ONLY. However, since column store will mainly be used for data warehousing, this should not be a big problem. You can always use partitioning to avoid rebuilding the index.

A columnstore index stores each column in a separate set of disk pages, rather than storing multiple rows per page as data traditionally has been stored. The difference between the column store and row store approaches is illustrated below: in the case of row store indexes, multiple pages contain multiple rows, with the columns spanning across pages; in the case of column store indexes, multiple pages contain multiple single columns. This means only the columns needed to solve a query are fetched from disk. Additionally, there is a good chance of redundant data within a single column, which further helps to compress the data; this has a positive effect on the buffer hit rate, as most of the data will stay in memory and will not need to be retrieved again.

Let us see a small example of how a columnstore index improves the performance of a query on a large table. As a first step, let us create a data set which is large enough to show the performance impact of a columnstore index. The time taken to create the sample data may vary on different computers based on the resources.
USE AdventureWorks
GO
-- Create New Table
CREATE TABLE [dbo].[MySalesOrderDetail](
    [SalesOrderID] [int] NOT NULL,
    [SalesOrderDetailID] [int] NOT NULL,
    [CarrierTrackingNumber] [nvarchar](25) NULL,
    [OrderQty] [smallint] NOT NULL,
    [ProductID] [int] NOT NULL,
    [SpecialOfferID] [int] NOT NULL,
    [UnitPrice] [money] NOT NULL,
    [UnitPriceDiscount] [money] NOT NULL,
    [LineTotal] [numeric](38, 6) NOT NULL,
    [rowguid] [uniqueidentifier] NOT NULL,
    [ModifiedDate] [datetime] NOT NULL
) ON [PRIMARY]
GO
-- Create clustered index
CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail]
( [SalesOrderDetailID] )
GO
-- Insert Sample Data
-- WARNING: This query may run up to 2-10 minutes based on your system's resources
INSERT INTO [dbo].[MySalesOrderDetail]
SELECT S1.*
FROM Sales.SalesOrderDetail S1
GO 100

Now let us do a quick performance test. I have kept STATISTICS IO ON to measure how much IO the following queries take. In my test, first I run the query which uses the regular index and note its IO usage. After that, we create the columnstore index and measure the IO of the same query.

-- Performance Test
-- Comparing Regular Index with ColumnStore Index
USE AdventureWorks
GO
SET STATISTICS IO ON
GO
-- Select Table with regular Index
SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
FROM [dbo].[MySalesOrderDetail]
GROUP BY ProductID
ORDER BY ProductID
GO
-- Table 'MySalesOrderDetail'. Scan count 1, logical reads 342261, physical reads 0, read-ahead reads 0.
-- Create ColumnStore Index
CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore]
ON [MySalesOrderDetail]
(UnitPrice, OrderQty, ProductID)
GO
-- Select Table with Columnstore Index
SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
FROM [dbo].[MySalesOrderDetail]
GROUP BY ProductID
ORDER BY ProductID
GO

It is very clear from the results that the query performs extremely fast after creating the columnstore index. The number of pages it has to read to run the query is drastically reduced, as the columns needed by the query are stored on the same pages and the query does not have to go through every single page to read those columns. If we enable the execution plan and compare, we can see that the columnstore index performs far better than the regular index in this case. Let us clean up the database.

-- Cleanup
DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail]
GO
TRUNCATE TABLE dbo.MySalesOrderDetail
GO
DROP TABLE dbo.MySalesOrderDetail
GO

In future posts we will see cases where a columnstore index is not the appropriate solution, as well as a few other tricks and tips for the columnstore index. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
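As a quick sanity check between the two test runs, you can list the indexes that exist on the demo table from the catalog views. A minimal sketch (my addition, not from the original post):

-- List all indexes on the demo table; the columnstore appears with its own type_desc
SELECT name, type_desc
FROM sys.indexes
WHERE object_id = OBJECT_ID(N'dbo.MySalesOrderDetail');
GO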

    Read the article

  • SQL SERVER – Solution – Puzzle – SELECT * vs SELECT COUNT(*)

    - by pinaldave
    Earlier I published the puzzle Why SELECT * throws an error but SELECT COUNT(*) does not. This question has received many interesting comments. Let us go over a few of the answers, which are valid. Before I start, let me acknowledge Rob Farley, who not only answered correctly very first but also started an interesting conversation in the same thread. The usual question will be: what is the right answer? I would like to point to the official Microsoft Connect items which discuss the same.

RGarvao: https://connect.microsoft.com/SQLServer/feedback/details/671475/select-test-where-exists-select
tiberiu utan: http://connect.microsoft.com/SQLServer/feedback/details/338532/count-returns-a-value-1

Rob Farley: count(*) is about counting rows, not a particular column. It doesn't even look to see what columns are available; it'll just count the rows, which in the case of a missing FROM clause, is 1. "select *" is designed to return columns, and therefore barfs if there are none available. Even more odd is this one: select 'blah' where exists (select *) – you might be surprised at the results…

Koushik: The engine performs a "Constant Scan" for COUNT(*), whereas in the case of "SELECT *" the engine tries to perform either index/clustered/table scans.

amikolaj: When you query 'select * from sometable', SQL replaces * with the current schema of that table. Without a source for the schema, SQL throws an error. So when you query 'select count(*)', you are counting the one row. * is just a constant to SQL here. Check out the execution plan. Like the description states – 'Scan an internal table of constants.' You could do select COUNT('my name is adam and this is my answer') and get the same answer.

Netra Acharya: SELECT * – here, * represents all columns from a table, so it always looks for a table (as we know, there should be a FROM clause before specifying the table name). It throws an error whenever this condition is not satisfied. SELECT COUNT(*) – here, COUNT is a function, so it is not mandatory to provide a table. Check it out with this:

DECLARE @cnt INT
SET @cnt = COUNT(*)
SELECT @cnt
SET @cnt = COUNT('x')
SELECT @cnt

Naveen: Select 1 / Select '*' will return 1/* as expected. Select COUNT(1)/COUNT(*) will return the count of the result set of the select statement. COUNT(1)/COUNT(*) will have one 1/* for each row in the result set of the select statement. The Select 1 or Select '*' result set will contain only 1 result, so the count is 1. Whereas "Select *" is syntax which expects a table or an equivalent of a table (table functions, etc.). It is like a compilation error for that query.

Ramesh: Hi friends, COUNT is an aggregate function and it expects rows (a list of records) for a specified single column, or whole rows for *. So 'select *' definitely gives an error because '*' is meant to return all the fields, but there is no table, and without a table it can only raise an error. In the case of 'Select Count(*)', there will be a record in the count function, so you will get the result '1'. Try using Select COUNT('RAMESH') and think of the error 'Must specify table to select from.' with 'RAMESH' in place of *. Pinal: if I am wrong then please clarify this.

Sachin Nandanwar: Any aggregate function expects a constant or a column name as an expression. DO NOT be confused by * in an aggregate function. The aggregate function does not treat it as a column name or a set of column names but as a constant value, as * is a keyword in SQL. You can replace any value instead of * for the COUNT function. For example, Select COUNT(5) will result in 1. The error resulting from select * is obvious: it expects an object from which it can extract the result set.

I sincerely thank you all for the wonderful conversation. I personally enjoyed it and I am sure all of you feel the same. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: CodeProject, Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
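For readers who want to try the puzzle themselves, here is a tiny repro, runnable as-is (my own sketch, not from the comments above):

-- Returns a single row with value 1: COUNT(*) needs no FROM clause
SELECT COUNT(*);
GO
-- Fails with 'Must specify table to select from.' because * needs columns:
-- SELECT *;
GO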

    Read the article

  • Screen (command-line program) bug?

    - by VioletCrime
    I fired up my Minecraft server again after about a year off. My server used to run 11.04, which has since been upgraded to 12.04; I lost my management scripts in the upgrade (I thought I'd backed up the user's home directory), but whatever, I enjoy developing stuff like that anyway. However, this time around, I'm running into issues. I start the Minecraft server using a detached screen; however, the script is unable to 'stuff' commands into the screen instance until I attach to the screen and then detach again. Once I do that, I can stuff anything I want into the server's terminal using the -X option, until I stop/start the server again; then I have to reattach and detach in order to restore functionality. Here's the manager script:

#!/bin/bash
#Name of the screen housing the server (set with the -S flag at startup)
SCREENNAME=minecraft1
#Name of the folder housing the Minecraft world
FOLDERNAME=world1

startServer (){
    if screen -list | grep "$SCREENNAME" > /dev/null
    then
        echo "Cannot start Minecraft server; it is already running!"
    else
        screen -dmS $SCREENNAME java -Xmx1024M -Xms1024M -jar minecraft_server.jar
        sleep 2
        if screen -list | grep "$SCREENNAME" > /dev/null
        then
            echo "Minecraft server started; happy mining!"
        else
            echo "ERROR: Minecraft server failed to start!"
        fi
    fi;
}

stopServer (){
    if screen -list | grep "$SCREENNAME" > /dev/null
    then
        echo "Server is running. Giving a 1-minute warning."
        screen -S $SCREENNAME -X stuff "/say Shutting down (halt) in one minute."
        screen -S $SCREENNAME -X stuff $'\015'
        sleep 45
        screen -S $SCREENNAME -X stuff "/say Shutting down (halt) in 15 seconds."
        screen -S $SCREENNAME -X stuff $'\015'
        sleep 15
        screen -S $SCREENNAME -X stuff "/stop"
        screen -S $SCREENNAME -X stuff $'\015'
    else
        echo "Server is not running; nothing to stop."
    fi;
}

stopServerNow (){
    if screen -list | grep "$SCREENNAME" > /dev/null
    then
        echo "Server is running. Giving a 5-second warning."
        screen -S $SCREENNAME -X stuff "/say EMERGENCY SHUTDOWN! 5 seconds to halt."
        screen -S $SCREENNAME -X stuff $'\015'
        sleep 5
        screen -S $SCREENNAME -X stuff "/stop"
        screen -S $SCREENNAME -X stuff $'\015'
    else
        echo "Server is not running; nothing to stop."
    fi;
}

restartServer (){
    if screen -list | grep "$SCREENNAME" > /dev/null
    then
        echo "Server is running. Giving a 1-minute warning."
        screen -S $SCREENNAME -X stuff "/say Shutting down (restart) in one minute."
        screen -S $SCREENNAME -X stuff $'\015'
        sleep 45
        screen -S $SCREENNAME -X stuff "/say Shutting down (restart) in 15 seconds."
        screen -S $SCREENNAME -X stuff $'\015'
        sleep 15
        screen -S $SCREENNAME -X stuff "/stop"
        screen -S $SCREENNAME -X stuff $'\015'
        sleep 2
        startServer
    else
        echo "Cannot restart server: it isn't running."
    fi;
}

#In order for this function to work, a directory 'backup/$FOLDERNAME' must exist in the same
#directory that '$FOLDERNAME' resides
backupWorld (){
    if screen -list | grep "$SCREENNAME" > /dev/null
    then
        echo "Server is running. Giving a 1-minute warning."
        screen -S $SCREENNAME -X stuff "/say Shutting down (backup) in one minute."
        screen -S $SCREENNAME -X stuff $'\015'
        screen -S $SCREENNAME -X stuff "/say Server should be down for no more than a few seconds."
        screen -S $SCREENNAME -X stuff $'\015'
        sleep 45
        screen -S $SCREENNAME -X stuff "/say Shutting down for backup in 15 seconds."
        screen -S $SCREENNAME -X stuff $'\015'
        sleep 15
        screen -S $SCREENNAME -X stuff "/stop"
        screen -S $SCREENNAME -X stuff $'\015'
    fi
    sleep 2
    if screen -list | grep $SCREENNAME > /dev/null
    then
        echo "Server is still running? Error."
    else
        cd ..
        tar -czvf backup0.tar.gz $FOLDERNAME
        mv backup0.tar.gz backup/$FOLDERNAME
        cd backup/$FOLDERNAME
        rm backup10.tar.gz
        mv backup9.tar.gz backup10.tar.gz
        mv backup8.tar.gz backup9.tar.gz
        mv backup7.tar.gz backup8.tar.gz
        mv backup6.tar.gz backup7.tar.gz
        mv backup5.tar.gz backup6.tar.gz
        mv backup4.tar.gz backup5.tar.gz
        mv backup3.tar.gz backup4.tar.gz
        mv backup2.tar.gz backup3.tar.gz
        mv backup1.tar.gz backup2.tar.gz
        mv backup0.tar.gz backup1.tar.gz
        cd ../../$FOLDERNAME
        screen -dmS $SCREENNAME java -Xmx1024M -Xms1024M -jar minecraft_server.jar;
        sleep 2
        if screen -list | grep "$SCREENNAME" > /dev/null
        then
            echo "Minecraft server restarted; happy mining!"
        else
            echo "ERROR: Minecraft server failed to start!"
        fi
    fi;
}

printCommands (){
    echo
    echo "$0 usage:"
    echo
    echo "Start   : Starts the server on a detached screen."
    echo "Stop    : Stop the server; includes a 1-minute warning."
    echo "StopNOW : Stops the server with only a 5-second warning."
    echo "Restart : Stops the server and starts the server again."
    echo "Backup  : Stops the server (1 min), backs up the world, and restarts."
    echo "Help    : Display this message."
}

#Forces case-insensitive string comparisons
shopt -s nocasematch

#Primary 'Switch'
if [[ $1 = "start" ]]
then
    startServer
elif [[ $1 = "stop" ]]
then
    stopServer
elif [[ $1 = "stopnow" ]]
then
    stopServerNow
elif [[ $1 = "backup" ]]
then
    backupWorld
elif [[ $1 = "restart" ]]
then
    restartServer
else
    printCommands
fi
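One workaround worth trying, offered as a sketch rather than a confirmed fix: some screen builds will not deliver -X commands until a default window has been selected, and naming window 0 explicitly with -p sidesteps the attach/detach dance:

# Give -X an explicit target window (window 0) so the command lands
# even when no client has ever attached to the session
screen -S $SCREENNAME -p 0 -X stuff "/say Hello from the manager script."
screen -S $SCREENNAME -p 0 -X stuff $'\015'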

    Read the article

  • How do I convert decimal numbers to binary in Perl?

    - by David
    I am trying to make a program that converts decimal numbers or text into binary numbers in Perl. The program asks for user input of a character or string, and then prints out the result to the console. How do I do this? The code I have been working on is below, but I cannot seem to fix it.

print "Enter a number to convert: ";
chomp(my $decimal = <STDIN>);
print "\nConverting $decimal to binary...\n";
my $binary = '';
while ($decimal > 0) {
    $binary = ($decimal % 2) . $binary;  # prepend the remainder as the next binary digit
    $decimal = int($decimal / 2);        # integer-divide so the loop terminates
}
print(($binary eq '' ? '0' : $binary), "\n");
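For the numeric case there is also a built-in shortcut worth knowing; this is a sketch using sprintf's standard %b conversion, with ord() covering the text case:

# Format a decimal number as binary in one step
my $n = 42;
my $bits = sprintf '%b', $n;   # "101010"
print "$bits\n";

# Text can be handled per-character via ord()
print join(' ', map { sprintf '%08b', ord } split //, 'Hi'), "\n";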

    Read the article

  • Want to "Save As" an OpenOffice word document into text via a Perl program

    - by siva prasad
    Hi all. I need a way to "Save As" a .doc file from OpenOffice as .txt, and I need a Perl program that can do this automatically; that is, I don't want to open the word document, go to Save As, and do it by hand. What I need is this: I will just give the word document's name, and the script should produce the corresponding .txt file as output. One important thing: my system is Linux-based. I saw the same program for Windows systems here, but I need this program for Linux. Also, the "antiword", "catdoc" and "wvWare" commands are not working on my Linux box. Please help me with this. Thank you in advance.
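One possible direction, sketched under the assumption that the installed office suite's soffice binary supports headless conversion (the switch spelling varies between OpenOffice and LibreOffice versions, so treat the flags below as illustrative):

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical wrapper: convert a .doc to .txt via a headless office run
my $doc = shift @ARGV or die "Usage: $0 document.doc\n";
my $status = system('soffice', '--headless', '--convert-to', 'txt:Text', $doc);
die "Conversion failed (exit $status)\n" if $status != 0;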

    Read the article

  • How can I walk through two files simultaneously in Perl?

    - by Alex Reynolds
    I have two text files that contain columnar data of the variety position-value, sorted by position. Here is an example of the first file (file A):

100 1
101 1
102 0
103 2
104 1
...

Here is an example of the second file (B):

20 0
21 0
...
100 2
101 1
192 3
193 1
...

Instead of reading one of the two files into a hash table, which is prohibitive due to memory constraints, what I would like to do is walk through two files simultaneously, in a stepwise fashion. What this means is that I would like to stream through lines of either A or B and compare position values. If the two positions are equal, then I perform a calculation on the values associated with that position. Otherwise, if the positions are not equal, I move through lines of file A or file B until the positions are equal (when I again perform my calculation) or I reach EOF of both files. Is there a way to do this in Perl?
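Yes – this is a classic sorted-merge walk. A minimal sketch (the file names, the placeholder calculation in the matched branch, and the whitespace-separated format are assumptions based on the examples above):

#!/usr/bin/perl
use strict;
use warnings;

# Merge-walk two position-sorted files without loading either into memory
open my $fha, '<', $ARGV[0] or die "Cannot open $ARGV[0]: $!";
open my $fhb, '<', $ARGV[1] or die "Cannot open $ARGV[1]: $!";

my $line_a = <$fha>;
my $line_b = <$fhb>;

while (defined $line_a and defined $line_b) {
    my ($pos_a, $val_a) = split ' ', $line_a;
    my ($pos_b, $val_b) = split ' ', $line_b;

    if ($pos_a == $pos_b) {
        print "$pos_a: ", $val_a + $val_b, "\n";  # placeholder calculation
        $line_a = <$fha>;
        $line_b = <$fhb>;
    }
    elsif ($pos_a < $pos_b) {
        $line_a = <$fha>;   # advance A until it catches up with B
    }
    else {
        $line_b = <$fhb>;   # advance B until it catches up with A
    }
}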

    Read the article

  • How do I make a system call to tar files (along with an exclude flag) work in Perl?

    - by superstar
    This is the system call I am making right now in Perl to tar the files, and it works fine:

system("${tarexe} -pcvf $tarname $includepath")

$tarexe -> location of my tar executable
$tarname -> myMock.tar
$includepath -> ./input/myMockPacketName ./input/myPacket/my2/*.wav ./input/myPacket/my3 ./input/myPacket/in.html

Now I want to exclude some files using an exclude flag, but the following does not exclude the files:

system("${tarexe} -pcvf $tarname $includepath --exclude $excludepath")

$excludepath -> ./input/myMockPacketName/my3

The same statement ${tarexe} -pcvf $tarname $includepath --exclude $excludepath works fine when I run it on the command line.
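A sketch of the usual fixes, assuming GNU tar: pass the exclusion in the --exclude=PATTERN form, place it before the include paths, and quote it so the shell started by system() cannot expand or split the pattern:

# Exclude first, then the include paths; single quotes protect the pattern
system("${tarexe} -pcvf $tarname --exclude='$excludepath' $includepath");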

    Read the article

  • How do I insert a row with Perl's Net::Cassandra::Easy?

    - by knorv
    When using the Perl module Net::Cassandra::Easy to interface with Cassandra, I use the following code to read columns col[123] from rows row[123] in column-family Standard1:

my $cassandra = Net::Cassandra::Easy->new(keyspace => 'Keyspace1', server => 'localhost');
$cassandra->connect();
my $result = $cassandra->get(['row1', 'row2', 'row3'], family => 'Standard1', byname => ['col1', 'col2', 'col3']);

This works as expected. However, when trying to insert row row1 with

$result = $cassandra->mutate(['row1'], family => 'Standard1', insertions => { "col1" => "Value to set." });

I get the error message Can't use string ("0") as a SCALAR ref while "strict refs" in use at .../Net/GenThrift/Thrift/BinaryProtocol.pm line 376. What am I doing wrong?

    Read the article

  • From a Perl test file, how do I check the contents of a file?

    - by justintime
    I want to test a script I have written in Perl, and specifically check what output it writes to a file. I wrote it some time ago and don't want to modify it to the extent of turning it into a module, but I would like to regression test it before adding some small functional changes. So far I have:

use Test::Command tests => 10;
exit_is_num($cmd, 0);
....

But the command produces some files, and I want to check those files are the same as I expect (either equal or matching some regexp). Any suggestions?
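One low-ceremony approach is to slurp each produced file and compare it with ordinary Test::More assertions alongside the Test::Command checks; a sketch (the path and expected contents are placeholders):

use strict;
use warnings;
use Test::More tests => 2;

# Slurp the file the script wrote (path is illustrative)
my $path = 'output/report.txt';
open my $fh, '<', $path or BAIL_OUT("Cannot open $path: $!");
my $content = do { local $/; <$fh> };  # read the whole file at once
close $fh;

is($content, "expected line\n", 'file contents match exactly');
like($content, qr/^expected/m, 'file contents match pattern');

CPAN's Test::File::Contents wraps the same idea, if a dedicated module is preferred.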

    Read the article

  • CGI Perl script is printing the HTML (along with 'content-type: text/html') string to the browser rather than rendering it

    - by Wikidkaka
    I have a Perl CGI script that needs to send back some HTML:

print qq^Content-type: text/html\n\n
<HTML>
<HEAD>
<TITLE>some title</TITLE>
...
...
...
...
^;

Instead of seeing the rendered HTML in the browser, I see the entire HTML along with the tags and the 'Content-type' line in plain text. Below is how things look in the browser:

Content-type: text/html <HTML> <HEAD> <TITLE>some title</TITLE> </HEAD> <BODY BGCOLOR="#FFFFFF" TEXT="#000000" LINK="#000000" VLINK="#000000" ALINK="#000000" BACKGROUND="" onLoad=document.forms[0].elements[0].focus();>
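Worth noting as a likely cause rather than a certain one: when the raw Content-type header reaches the browser, the web server is usually serving the script as a static file instead of executing it (for Apache, check that ExecCGI and an AddHandler cgi-script directive cover the script's directory). On the Perl side, a minimal sketch that lets CGI.pm emit a well-formed header:

#!/usr/bin/perl
use strict;
use warnings;
use CGI qw(:standard);

# CGI.pm prints the Content-type header, then we print the markup
print header('text/html');
print "<html><head><title>some title</title></head>\n";
print "<body>hello</body></html>\n";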

    Read the article

< Previous Page | 58 59 60 61 62 63 64 65 66 67 68 69  | Next Page >