Search Results

Search found 94502 results on 3781 pages for 'server side scripting'.


  • 2nd Year College - Learning - Microsoft Server Products

    - by Ryan
    As the title says, I just finished my first year of college (majoring in Software Engineering). Fortunately my school likes Microsoft enough that I can get pretty much anything Microsoft sells, and I can get IBM WebSphere and the like for free as well. Earlier this year I set up an oldish computer (2.6 GHz Pentium D, x64) to run Ubuntu Server headless. I'm predominantly a Java developer, so Apache, Maven, Nexus, Sonar, SVN, etc. made it onto the machine. It worked really well for personal and school projects, especially team projects (quick ramp-up). Anyway, I started to pick up C# to complement my Java knowledge (don't judge me :P), and am interested in working with some of the associated Microsoft equivalents. The machine currently has the Ubuntu install as well as Windows 7 Ultimate. I do all of my actual development work off my laptop, also running Windows 7 Ultimate. I was wondering what software you would recommend putting on the machine. I'm not actually serving anything off the machine itself, but in Ubuntu I had it running integration tests with Hudson on every commit, profiling my applications, and so on. The machine would run headless and I would remote into it. Here is what I am currently leaning towards / wondering about: Windows 7 Ultimate vs. Windows Server 2008 (R2) (no one is really clear why I should go with one over the other); Windows Team Foundation; SharePoint (never used it before, kind of meh about it); IBM WebSphere or GlassFish (some Java EE web server); SQL Server 2008; a DVCS. To better control product conflicts and limit resource use, I'm wondering if I should install things into virtual machines (I can get VMware or Microsoft virtualization products). I also plan on installing everything I had running under Linux (it's almost entirely Java-based development software, so it'll run on both; the only reason I went with Ubuntu during the year was because the Apache build seemed better). I'm primarily looking to become familiar with enterprise software development tools, as well as get something functional that will help my development process (i.e. I'll still use Project and assign tasks even though I might be the only one to assign tasks to, just to practice doing so). Are there any other software or configuration details I should explore? Opinions on my current list? I primarily use C#, Java, and PHP; I'm familiar with Ruby and Python as well. Thanks!

    Read the article

  • echo value inside a variable ?

    - by Kimi
    Suppose x=102 and y=x, so y holds the name of another variable rather than its value: echo $y prints x (not 102), while echo $x prints 102. Say I don't know what is inside y, and I want the value of x to be echoed by going through y; something like a=`echo $(echo $y)`; echo $a, with the answer being 102.
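
    What the question describes is indirect variable expansion. A minimal sketch of the two usual ways to do it (assuming bash for the ${!var} form; the eval form also works in ksh and POSIX sh):

```bash
#!/bin/bash
x=102
y=x              # y holds the *name* of another variable, not its value

echo "$y"        # prints: x
echo "${!y}"     # bash indirect expansion: prints 102

# Portable alternative for POSIX sh/ksh
eval "a=\$$y"
echo "$a"        # prints: 102
```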

    Read the article

  • Servers - Buying New vs Buying Second-hand

    - by Django Reinhardt
    We're currently in the process of adding additional servers to our website. We have a pretty simple topology planned: a firewall/router server in front of a web application server and a database server. Here's a simple (and technically incorrect) diagram that I used in a previous question to illustrate what I mean. We're now wondering about the specs of our two new machines (the web app and firewall servers) and whether we can get away with buying a couple of old servers. (Note: both machines will be running Windows Server 2008 R2.) We're not too concerned about our firewall/router server as we're pretty sure it won't be taxed too heavily, but we are interested in our web app server. I realise that answering this type of question is really difficult without a ton of specifics on users, bandwidth, concurrent sessions, etc., so I just want to focus on the general wisdom of buying old versus new. I had originally specced a new Dell PowerEdge R300 (1U rack) for our company. In short, because we're going to be caching as much data as possible, I focused on processor speed and memory: Quad-Core Intel Xeon X3323 2.5GHz (2x3M cache), 1333MHz FSB, 16GB DDR2 667MHz. But when I was looking for a cheap second-hand machine for our firewall/router, I came across several machines that made our engineer ask a very reasonable question: if we stuck a boatload of RAM in this thing, wouldn't it do for the web app server and save us a ton of money in the process? For example, what about a second-hand machine with the following specs: 2x Dual-Core AMD Opteron 2218 2.6GHz (2MB cache), 1000MHz HT, 16GB DDR2 667MHz. Would it really be comparable with the more expensive (new) server above? Our engineer postulated that the reason companies upgrade their servers to newer processors is often that they want to reduce their power costs, and that a 2.6GHz processor is still a 2.6GHz processor, no matter when it was made. Benchmarks on various sites don't really support this theory, but I was wondering what server admins thought. Thanks for any advice.

    Read the article

  • sorting group of lines

    - by benjamin button
    I have a text file like the one below, where every three consecutive lines form a unit:

    iv_destination_code_10
    TAP310_mapping_RATERUSG_iv_destination_code_10
    RATERUSG.iv_destination_code_10 = WORK.maf_feature_info[53,6]
    iv_destination_code_2
    TAP310_mapping_RATERUSG_iv_destination_code_2
    RATERUSG.iv_destination_code_2 = WORK.maf_feature_info[1,6]
    iv_destination_code_3
    TAP310_mapping_RATERUSG_iv_destination_code_3
    RATERUSG.iv_destination_code_3 = WORK.maf_feature_info[7,6]
    iv_destination_code_4
    TAP310_mapping_RATERUSG_iv_destination_code_4
    RATERUSG.iv_destination_code_4 = WORK.maf_feature_info[13,6]
    iv_destination_code_5
    TAP310_mapping_RATERUSG_iv_destination_code_5
    RATERUSG.iv_destination_code_5 = WORK.maf_feature_info[19,6]
    iv_destination_code_6
    TAP310_mapping_RATERUSG_iv_destination_code_6
    RATERUSG.iv_destination_code_6 = WORK.maf_feature_info[29,6]
    iv_destination_code_7
    TAP310_mapping_RATERUSG_iv_destination_code_7
    RATERUSG.iv_destination_code_7 = WORK.maf_feature_info[35,6]
    iv_destination_code_8
    TAP310_mapping_RATERUSG_iv_destination_code_8
    RATERUSG.iv_destination_code_8 = WORK.maf_feature_info[41,6]
    iv_destination_code_9
    TAP310_mapping_RATERUSG_iv_destination_code_9
    RATERUSG.iv_destination_code_9 = WORK.maf_feature_info[47,6]

    For example, the following three lines are one unit:

    iv_destination_code_9
    TAP310_mapping_RATERUSG_iv_destination_code_9
    RATERUSG.iv_destination_code_9 = WORK.maf_feature_info[47,6]

    The trailing number in iv_destination_code_9 (here 9) is the key I have to sort by, descending (10, 9, 8, ...). I need a shell script/awk that will sort the units in descending order. How is this possible?
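
    One common way to approach this (a sketch, assuming a POSIX userland, that the input file — here called input.txt — contains exactly three lines per unit, and that the data contains no | or tab characters):

```bash
# Join each 3-line unit into a single record, sort by the trailing number
# of the first line (descending), then split the units back into lines.
paste -d'|' - - - < input.txt |
  awk -F'|' '{ key = $1; sub(/.*_/, "", key); print key "\t" $0 }' |
  sort -k1,1nr |      # numeric sort on the extracted key, descending
  cut -f2- |          # drop the key again
  tr '|' '\n'         # restore the original 3-line units
```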

    Read the article

  • The proper way to script periodically pulling a page from an https site

    - by DarthShader
    I want to create a command-line script for Cygwin/Bash that logs into a site, navigates to a specific page and compares it with the results of the last run. So far, I have it working with Lynx like so:

    ----snipped, just setting variables----
    echo "# Command logfile created by Lynx 2.8.5rel.5 (29 Oct 2005)
    ----snipped the recorded keystrokes-------
    key Right Arrow
    key p
    key Right Arrow
    key ^U" >> $tmp1
    #p, right arrow initiate the page saving
    #"type" the filename inside the "where to save" dialog
    for i in $(seq 0 $((${#tmp2} - 1)))
    do
        echo "key ${tmp2:$i:1}" >> $tmp1
    done
    #hit enter and quit
    echo "key ^J
    key y
    key q
    key y
    " >> $tmp1
    lynx -accept_all_cookies -cmd_script=$tmp1 https://thewebpage.com/login
    diff $tmp2 $oldComp
    mv $tmp2 $oldComp

    It definitely does not feel "right": the cmd_script consists of relative user actions instead of specifying exact link names and actions. So if anything on the site ever changes, switches places, or a new link is added, I will have to re-record the actions. Also, I can't check for any errors, so I can't abort the script if something goes wrong (login failed, etc.). Another alternative I have been looking at is Mechanize with Ruby (as a note, I have zero experience with Ruby). What would be the best way to improve or rewrite this?
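
    One way to get away from replaying keystrokes is to talk HTTP directly with curl and keep the session cookie between requests. A rough sketch only — the login URL, the form field names ("user", "pass") and the target page below are hypothetical placeholders, and a site with a JavaScript-driven login would still need something like Mechanize:

```bash
#!/bin/bash
# Sketch: log in with curl, fetch a page, diff it against the previous run.
set -euo pipefail

COOKIES=$(mktemp)
NEW=$(mktemp)
OLD="$HOME/.last_page"

# Log in once and keep the session cookie; -f aborts on an HTTP error,
# so a failed login stops the script (set -e).
curl -fsS -c "$COOKIES" -d "user=me" -d "pass=secret" \
     https://thewebpage.com/login > /dev/null

# Fetch the page of interest and compare it with the previous run.
curl -fsS -b "$COOKIES" -o "$NEW" https://thewebpage.com/the/specific/page

if [ -f "$OLD" ]; then
    diff "$OLD" "$NEW" || echo "Page changed since last run"
fi
mv "$NEW" "$OLD"
rm -f "$COOKIES"
```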

    Read the article

  • Shell script variable problem

    - by user280288
    Hi all, I'm trying to write a shell script to automate a job, but I'm currently stuck. Here's the problem: I have a variable named var1 (a decreasing number from 25 to 0) and another variable named var${var1}, which holds some string. When I try to reference var${var1} anywhere in the script via echo, it fails. I have tried $[var$var1], ${var$var} and many other forms, but every time it either gives the value of var1 or reports an "operand expected" error. Thanks for your help.

    Read the article

  • SSAS Reporting Services - Set specific language / translation

    - by Chris
    Hi all, in the data warehouse there's a default language for the measures, and I added a translation with German captions. In a Visual Studio Report Server project, when creating a query on my German OS, the cube and its measures are displayed in German. When dragging measures into the MDX query window, the default measure name is used. That's what I want and expect, since I would like to write MDX queries using the default measure names. But when the query executes, the column created for each measure is translated to German again. This results in German column names within my dataset, which I don't want; I'd like the English column names. I already tried changing the connection string to: Data Source=server;Initial Catalog=DataWarehouse;LocaleIdentifier=1033 but that doesn't help, I still see the German translations. Does anyone know how to set a specific translation?

    Read the article

  • Cannot find the certificate

    - by user409756
    We get a T-SQL (SQL Server 2008 R2) error on BACKUP CERTIFICATE: ERROR_NUMBER 15151, SEVERITY 16, STATE 1, PROCEDURE -, LINE 8, MESSAGE: Cannot find the certificate 'certificate1', because it does not exist or you do not have permission. We can see the certificate in master.sys.certificates. Our pseudo-code:

    1. Copy an unattached template_db to db1 and attach db1.
    2. Create certificate1 (in a stored procedure in the master db) and generate @password.
    3. CREATE DATABASE ENCRYPTION KEY … ENCRYPTION BY SERVER CERTIFICATE '+@certificate_name +… (in a stored procedure in db1).
    4. Turn on Transparent Data Encryption for db1 using certificate1: (N'ALTER DATABASE '+@db_name+N' SET ENCRYPTION ON')
    5. N'BACKUP CERTIFICATE '+@certificate_name+N' TO FILE = '''+@certificate_file_path+N''' WITH PRIVATE KEY ( FILE = '''+@private_key_file_path+N''', ENCRYPTION BY PASSWORD = '''+@password+N''''

    To try to work around the error, we tested the BACKUP CERTIFICATE code three ways, in a different database each time, including db1 and master. All get the same error. Any ideas? Thanks.
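
    For context, the statement that the dynamic SQL ultimately builds boils down to the sketch below (the file paths and password are placeholders). One detail worth checking: BACKUP CERTIFICATE only sees certificates in the current database, so a certificate listed in master.sys.certificates has to be backed up while connected to master.

```sql
USE master;
GO
-- Back up the certificate from the database that owns it (here: master).
BACKUP CERTIFICATE certificate1
    TO FILE = N'C:\Backup\certificate1.cer'
    WITH PRIVATE KEY (
        FILE = N'C:\Backup\certificate1.pvk',
        ENCRYPTION BY PASSWORD = N'A_Str0ng_Placeholder_Passw0rd!'
    );
GO
```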

    Read the article

  • grep value inside a variable pointing to other variable

    - by Joice
    Using ksh. Suppose abc=1, efg=2 and hgd=3, but those values are not known to me. Say I have Value="abc efg hgd"; abc, efg and hgd all contain some value which I don't know. Now I want to grep for the value contained inside abc (and efg and hgd), something like: for i in $Value; do grep "echo $(($((echo $i | cut -d'|' -f2))))"; done. The grep should look for the value inside abc, efg and hgd, i.e. grep 1, grep 2, grep 3.
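
    A sketch of one common way to do this in ksh (the file name somefile.txt is a placeholder): eval resolves a variable whose name is stored in another variable, so the loop can grep for each unknown value in turn. ksh93 also offers typeset -n (namerefs) for the same effect.

```sh
#!/bin/ksh
abc=1 efg=2 hgd=3
Value="abc efg hgd"

for name in $Value; do
    eval "val=\$$name"          # val becomes the value of abc, efg, hgd in turn
    grep "$val" somefile.txt    # effectively: grep 1, grep 2, grep 3
done
```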

    Read the article

  • How does the #! work?

    - by mocybin
    In a script you must include a #! on the first line followed by the path to the program that will execute the script (e.g.: sh, perl). As far as I know though, the # character denotes the start of a comment and that line is supposed to be ignored by the program executing the script. It would seem though, that this first line is at some point read by something in order for the script to be executed by the proper program. Could somebody please shed more light on the workings of the #! ? Edit: I'm really curious about this, so the more in-depth the answer the better.
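
    To illustrate what actually happens (a small sketch; the interpreter path and file name are just examples): it is the kernel, not the shell or the interpreter, that reads the first two bytes of the file, sees #!, and launches the named program with the script's path appended as an argument. The interpreter then treats that first line as an ordinary comment.

```sh
$ cat hello.pl
#!/usr/bin/perl
print "hi\n";
$ chmod +x hello.pl
$ ./hello.pl      # the kernel effectively runs: /usr/bin/perl ./hello.pl
hi
```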

    Read the article

  • SQLAuthority News – Interview with SQL Server MVP Madhivanan – A Real Problem Solver

    - by pinaldave
    Madhivanan (SQL Server MVP) is a real community hero. He is known for his two skills – 1) Help Community and 2) Help Community. I have met him many times, and every time I feel that if anybody in the online world needs help, Madhivanan does his best to reach out to them and solve the problem. His name is not new if you are reading this blog or have ever asked a question in any online SQL forum; he is always there to help. When Madhivanan has time he even helps people on this blog as well. He spends his valuable time helping the community. He recently crossed over 1000 helpful comments on this blog. On that occasion, I interviewed him to find out whether he has any life outside SQL. Q 1. Tell us something about yourself. I am Madhivanan, an MSc Computer Science graduate from Chennai, India, working as a Lead Analyst-Project at Ellaar Infotek Solutions Private Limited. I am basically a developer who started with Visual Basic 6.0, SQL Server 2000 and Crystal Reports 8. As the years went on, I started working more on writing queries in SQL Server in most of the projects developed in my company. I have a good level of knowledge in Oracle, MySQL and PostgreSQL as well. Now I am leading a project developed on Windows Azure. Q 2. What motivates you to help people in the community and forums? When I got errors during application development in the early days of my career, I found good solutions on online forums and weblogs, so I decided to help others if possible. I visit forums and help people when I know the answer to their questions. I am one of the leading posters at www.sqlteam.com and also a moderator at www.sql-server-performance.com. I also take part in Visual Basic and Crystal Reports forums. I have been a SQL Server MVP since 2007. Q 3. Your personal life is not much known. Tell us something about your personal life. I am a happily married person. My wife is a B.Pharm graduate. I have a son who is now 18 months old. Q 4. Where can we read further about your community activity? I have a blog at http://beyondrelational.com/blogs/madhivanan where you can find most of my T-SQL material. Q 5. When not working with SQL, what do you do? When not working with SQL, I spend time playing with my son, reading magazines and watching TV. Madhivanan, for your work and help to the community, a true salute to you. Hats off, my friend. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – IO_COMPLETION – Wait Type – Day 10 of 28

    - by pinaldave
    For any good system three things are vital: CPU, Memory and IO (disk). Among these three, IO is the most crucial factor of SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and Memory frequently. However, the disk is often upgraded for either improving the space, speed or throughput. Today we will look at an IO-related wait types. From Book On-Line: Occurs while waiting for I/O operations to complete. This wait type generally represents non-data page I/Os. Data page I/O completion waits appear as PAGEIOLATCH_* waits. IO_COMPLETION Explanation: Any tasks are waiting for I/O to finish. This is a good indication that IO needs to be looked over here. Reducing IO_COMPLETION wait: When it is an issue concerning the IO, one should look at the following things related to IO subsystem: Proper placing of the files is very important. We should check the file system for proper placement of files – LDF and MDF on a separate drive, TempDB on another separate drive, hot spot tables on separate filegroup (and on separate disk),etc. Check the File Statistics and see if there is higher IO Read and IO Write Stall SQL SERVER – Get File Statistics Using fn_virtualfilestats. Check event log and error log for any errors or warnings related to IO. If you are using SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA Queue Depth. In one of my recent projects, the SAN was performing really badly so the SAN administrator did not accept it. After some investigations, he agreed to change the HBA Queue Depth on development (test environment) set up and as soon as we changed the HBA Queue Depth to quite a higher value, there was a sudden big improvement in the performance. It is very possible that there are no proper indexes in the system and there are lots of table scans and heap scans. Creating proper index can reduce the IO bandwidth considerably. If SQL Server can use appropriate cover index instead of clustered index, it can effectively reduce lots of CPU, Memory and IO (considering cover index has lesser columns than cluster table and all other; it depends upon the situation). You can refer to the two articles that I wrote; they are about how to optimize indexes: Create Missing Indexes Drop Unused Indexes Checking Memory Related Perfmon Counters SQLServer: Memory Manager\Memory Grants Pending (Consistent higher value than 0-2) SQLServer: Memory Manager\Memory Grants Outstanding (Consistent higher value, Benchmark) SQLServer: Buffer Manager\Buffer Hit Cache Ratio (Higher is better, greater than 90% for usually smooth running system) SQLServer: Buffer Manager\Page Life Expectancy (Consistent lower value than 300 seconds) Memory: Available Mbytes (Information only) Memory: Page Faults/sec (Benchmark only) Memory: Pages/sec (Benchmark only) Checking Disk Related Perfmon Counters Average Disk sec/Read (Consistent higher value than 4-8 millisecond is not good) Average Disk sec/Write (Consistent higher value than 4-8 millisecond is not good) Average Disk Read/Write Queue Length (Consistent higher value than benchmark is not good) Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it to a production server. 
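
    For the file-statistics check mentioned above, here is a sketch of the DMV form of that query (sys.dm_io_virtual_file_stats is the modern equivalent of fn_virtualfilestats), together with a quick look at the IO_COMPLETION wait itself:

```sql
-- Per-file read/write stalls: files with high io_stall values point to IO pressure.
SELECT  DB_NAME(vfs.database_id) AS database_name,
        mf.physical_name,
        vfs.num_of_reads,
        vfs.io_stall_read_ms,
        vfs.num_of_writes,
        vfs.io_stall_write_ms
FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN    sys.master_files AS mf
        ON  vfs.database_id = mf.database_id
        AND vfs.file_id     = mf.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;

-- How much time the instance has spent on IO_COMPLETION waits since startup.
SELECT  wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM    sys.dm_os_wait_stats
WHERE   wait_type = 'IO_COMPLETION';
```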
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Types, SQL White Papers, T SQL, Technology

    Read the article

  • SQL SERVER – How to Get SQL Server Restart Notification?

    - by Pinal Dave
    A few days back a friend called me to ask whether there is any tool which can be used to get a restart notification for SQL Server in their environment. I told him that SQL Server can do it by itself with some configuration. He was happy and surprised to know that he need not spend any extra money. In SQL Server, we can configure stored procedure(s) to run at start-up of SQL Server; this blog gives the steps to achieve it. There are many situations where this feature can be used. Below are a few: logging SQL Server startup timings; modifying data in some table during startup (e.g. a table in tempdb); sending a notification about SQL Server start. Step 1 – Enable 'scan for startup procs'. This can be done either using T-SQL or the Management Studio user interface. EXEC sys.sp_configure N'Show Advanced Options', N'1' GO RECONFIGURE WITH OVERRIDE GO EXEC sys.sp_configure N'scan for startup procs', N'1' GO RECONFIGURE WITH OVERRIDE GO Below is the interface to change the setting. We need to go to “Server” > “Properties” and use the “Advanced” tab. “Scan for Startup Procs” is the parameter under the “Miscellaneous” section as shown below. We need to set the value to “True” and hit OK. Step 2 – Create the stored procedure. It’s important to note that the procedure is executed after recovery is finished for ALL databases. Here is a sample stored procedure; you can use your own logic in the procedure. CREATE PROCEDURE SQLStartupProc AS BEGIN CREATE TABLE ##ThisTableShouldAlwaysExists (AnyColumn INT) END Step 3 – Set the procedure to run at startup. We need to use sp_procoption to mark the procedure to run at startup. Here is the code to let SQL Server know that this is a startup proc: sp_procoption 'SQLStartupProc', 'startup', 'true' This can be used only for procedures in the master database. Msg 15398, Level 11, State 1, Procedure sp_procoption, Line 89 Only objects in the master database owned by dbo can have the startup setting changed. We also need to remember that such a procedure should not have any input/output parameters. Here is the error which would be raised: Msg 15399, Level 11, State 1, Procedure sp_procoption, Line 107 Could not change startup option because this option is restricted to objects that have no parameters. Verification: here is the query to find which procedures are marked as startup procedures. SELECT name FROM sys.objects WHERE OBJECTPROPERTY(OBJECT_ID, 'ExecIsStartup') = 1 Once this was done, I restarted the SQL instance and here is what we see in the SQL Server ERRORLOG: Launched startup procedure 'SQLStartupProc'. This confirms that the stored procedure was executed. You can also notice that this is done after all databases are recovered: Recovery is complete. This is an informational message only. No user action is required. After a few days my friend called me again and asked – how do I turn this OFF? Use the comments section and post the answer for him.  Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL
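
    A sketch of one likely answer to that closing question, based on the documented sp_procoption options ('true'/'on' and 'false'/'off'):

```sql
-- Clear the startup flag so the procedure no longer runs when SQL Server starts.
EXEC sp_procoption @ProcName   = N'SQLStartupProc',
                   @OptionName  = 'startup',
                   @OptionValue = 'off';
```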

    Read the article

  • Download Ranking for March (Fusion Middleware Edition)

    - by rika.tokumichi
    This OTN Japan post (originally in Japanese) announces the Fusion Middleware download ranking for the period March 1 – March 31: 1. Oracle WebLogic Server 11g (Download); 2. Oracle JDeveloper 11g (Download); 3. Oracle JRockit Mission Control 3.1.2 (Download); 4. Oracle JDeveloper 10g (Download); 5. Oracle SOA Suite 11g (Download). The rest of the post points readers to WebLogic Server resources on OTN: introductory PDF and Flash material on Oracle WebLogic Server 11g, notes for users of the BEA-era WebLogic Server 8.1 considering migration to Oracle WebLogic Server 11g, a "How To" article on WebLogic Server troubleshooting that covers, among other things, the WebLogic Diagnostic Framework (WLDF), and Oracle Direct Seminar sessions, closing with a link to further Oracle WebLogic Server 11g material.

    Read the article

  • Using the Apple Scripting bridge with Mail to send an attachment causes the message background to go

    - by Naym
    Hello, When I use the Apple Scripting Bridge to send a message with an attachment the background of the message is set to black which is a problem because the text is also black. The code in question is: MailApplication *mail = [SBApplication applicationWithBundleIdentifier:@"com.apple.Mail"]; /* create a new outgoing message object */ MailOutgoingMessage *emailMessage = [[[mail classForScriptingClass:@"outgoing message"] alloc] initWithProperties: [NSDictionary dictionaryWithObjectsAndKeys: emailSubject, @"subject", [self composeEmailBody], @"content", nil]]; /* add the object to the mail app */ [[mail outgoingMessages] addObject: emailMessage]; /* set the sender, show the message */ emailMessage.sender = [NSString stringWithFormat:@"%@ <%@>",[[[mail accounts] objectAtIndex:playerOptions.mailAccount] fullName],[[[[mail accounts] objectAtIndex:playerOptions.mailAccount] emailAddresses] objectAtIndex:0]]; emailMessage.visible = YES; /* create a new recipient and add it to the recipients list */ MailToRecipient *theRecipient = [[[mail classForScriptingClass:@"to recipient"] alloc] initWithProperties: [NSDictionary dictionaryWithObjectsAndKeys: opponentEmail, @"address", nil]]; [emailMessage.toRecipients addObject: theRecipient]; /* add an attachment, if one was specified */ if ( [playerInfo.gameFile length] > 0 ) { /* create an attachment object */ MailAttachment *theAttachment = [[[mail classForScriptingClass:@"attachment"] alloc] initWithProperties: [NSDictionary dictionaryWithObjectsAndKeys: playerInfo.gameFile, @"fileName", nil]]; /* add it to the list of attachments */ [[emailMessage.content attachments] addObject: theAttachment]; } /* send the message */ [emailMessage send]; The actual change to the background colour occurs on the second last line, which is: [[emailMessage.content attachments] addObject: theAttachment]; The code sections above are essentially lifted from the SBSendMail example code from Apple. At this stage I've really only made the changes necessary to integrate with the data from my application. If I build and run the SBSendMail example after freshly downloading it from Apple the message background is also changed to black with execution of the same line. It does not appear to matter which type of file is attached, where it is located, or on which computer or operating system is used. This could be a bug in Apple's scripting bridge but has anyone come across this problem and found a solution? ALternatively does anyone know if the background colour of a MailOutgoingMessage instance can be changed with the scripting bridge? Thanks for any assistance with this, Naym.

    Read the article

  • SQL SERVER – Best Reference – Wait Type – Day 27 of 28

    - by pinaldave
    Writing my article series on Extended Events was a great learning experience – truly one where I learned far more than I would have otherwise. Besides my blog series, there are excellent references available on the internet which one can use to learn this subject further. Here is the list of resources (in no particular order): sys.dm_os_wait_stats (Books Online) – an excellent starting point and the official documentation of the wait type descriptions. SQL Server Best Practices Article by Tom Davidson – it goes without saying, this is the BEST reference available on this subject. Performance Tuning with Wait Statistics by Joe Sack – one of the best slide decks available on this subject; it covers many real-world scenarios. Wait statistics, or please tell me where it hurts by Paul Randal – notes from the real world from SQL Server skilled master Paul Randal. The SQL Server Wait Type Repository… by Bob Ward – a thorough article on wait types and their resolution; a MUST read. Tracking Session and Statement Level Waits by Jonathan Kehayias – a unique article on the subject, where wait stats and Extended Events come together. Wait Stats Introductory References by Jimmy May – an excellent collection of reference links. Great Resource On SQL Server Wait Types by Glenn Berry – a perfect DMV query to find the top wait stats. Performance Blog by Idera – an in-depth article on the top wait statistics in the community. I have listed all the references I have found, in no particular order. If I have missed any good reference, please leave a comment and I will add it to the list. Read all the posts in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Print SSRS Report / PDF automatically from SQL Server agent or Windows Service

    - by Jeremy Ramos
    Originally posted on: http://geekswithblogs.net/JeremyRamos/archive/2013/10/22/print-ssrs-report--pdf-from-sql-server-agent-or.aspx
    I have turned the Web upside-down to find a solution for automated printing of an SSRS report with the fewest components and the least maintenance possible. This is because we do not have a full software development team to maintain an app, and we have to minimise the support overhead for the support team. Here is my setup: SQL Server 2008 R2 on Windows Server 2008 R2; PDF-format reports generated by SSRS report subscriptions to a Windows file share; a network printer; coloured reports with logo and branding. I have found and tested the following solutions, to no avail:

    - Calling the Adobe Acrobat Reader exe: "C:\Program Files (x86)\Adobe\Reader 11.0\Reader\acroRd32.exe" /n /s /o /h /t "C:\temp\print.pdf" \\printserver\printername — Pro: very simple option. Con: Adobe Acrobat Reader requires launching the GUI to send a job to a printer, hence this option cannot be used when printing from a service.
    - Calling the Adobe Acrobat Reader exe as a process from a .NET console app — Pro: a bit harder than above, but still a simple solution. Con: same as above.
    - PowerShell script (Start-Process -FilePath "C:\temp\print.pdf" -Verb Print) — Pro: very simple option. Con: uses the default PDF client in quiet mode to print, but also requires an active session.
    - Foxit Reader — Pro: very simple option. Con: requires a GUI, same as Adobe Acrobat Reader.
    - Using the Reporting Services Web service to run and stream the report to an image object and then pass it to the printer — quite complex; this is what we're trying to avoid.

    After pulling my hair out for two days, testing and evaluating the above solutions, I ended up learning more about printers than ever before in my life, and about how printer drivers work with PostScript. I then bumped into a PostScript interpreter called GhostScript (http://www.ghostscript.com/) and the solution started to get clearer and clearer. I managed to achieve a solution (maybe not the simplest, but efficient enough to achieve the least-maintenance, least-components goal) in three simple steps:

    1. Install GhostScript (http://www.ghostscript.com/download/) – an open-source PostScript and PDF interpreter. Printing directly using GhostScript only produces grayscale prints with the generic laserjet driver, unless you save the page as a BMP image and then interpret the colours from that image.
    2. Install GSView (http://pages.cs.wisc.edu/~ghost/gsview/) – a GhostScript add-on that makes it easier to print directly to a Windows printer. GSPrint automates the PDF -> BMP -> printer driver chain above.
    3. Run the GSPrint command from SQL Server Agent or a Windows service: "C:\Program Files\Ghostgum\gsview\gsprint.exe" -color -landscape -all -printer "printername" "C:\temp\print.pdf" (command-line options are documented at http://pages.cs.wisc.edu/~ghost/gsview/gsprint.htm).

    Another lesson learned: since you are calling the script from the service account, it will not necessarily have the printer mapped in its Windows profile (if it even has one). The workaround is to add a local printer as you normally would and then map this printer to the network printer. Note that you may need to install the printer driver locally on the server. So, that's it! There are many ways to achieve a solution. The key thing is how you provide the smartest solution!
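
    For completeness, a sketch of how step 3 might be invoked from T-SQL rather than a CmdExec job step (assuming xp_cmdshell has been enabled on the instance; the printer name and file path are the placeholders used in the post):

```sql
-- Run the GSPrint command line from SQL Server, e.g. inside an Agent job step.
EXEC master..xp_cmdshell
    '"C:\Program Files\Ghostgum\gsview\gsprint.exe" -color -landscape -all -printer "printername" "C:\temp\print.pdf"';
```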

    Read the article

  • SQL SERVER – Index Created on View not Used Often – Observation of the View – Part 2

    - by pinaldave
    Earlier, I have written an article about SQL SERVER – Index Created on View not Used Often – Observation of the View. I received an email from one of the readers, asking if there would no problems when we create the Index on the base table. Well, we need to discuss this situation in two different cases. Before proceeding to the discussion, I strongly suggest you read my earlier articles. To avoid the duplication, I am not going to repeat the code and explanation over here. In all the earlier cases, I have explained in detail how Index created on the View is not utilized. SQL SERVER – Index Created on View not Used Often – Limitation of the View 12 SQL SERVER – Index Created on View not Used Often – Observation of the View SQL SERVER – Indexed View always Use Index on Table As per earlier blog posts, so far we have done the following: Create a Table Create a View Create Index On View Write SELECT with ORDER BY on View However, the blog reader who emailed me suggests the extension of the said logic, which is as follows: Create a Table Create a View Create Index On View Write SELECT with ORDER BY on View Create Index on the Base Table Write SELECT with ORDER BY on View After doing the last two steps, the question is “Will the query on the View utilize the Index on the View, or will it still use the Index of the base table?“ Let us first run the Create example. USE tempdb GO IF EXISTS (SELECT * FROM sys.views WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[SampleView]')) DROP VIEW [dbo].[SampleView] GO IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[mySampleTable]') AND TYPE IN (N'U')) DROP TABLE [dbo].[mySampleTable] GO -- Create SampleTable CREATE TABLE mySampleTable (ID1 INT, ID2 INT, SomeData VARCHAR(100)) INSERT INTO mySampleTable (ID1,ID2,SomeData) SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY o1.name), ROW_NUMBER() OVER (ORDER BY o2.name), o2.name FROM sys.all_objects o1 CROSS JOIN sys.all_objects o2 GO -- Create View CREATE VIEW SampleView WITH SCHEMABINDING AS SELECT ID1,ID2,SomeData FROM dbo.mySampleTable GO -- Create Index on View CREATE UNIQUE CLUSTERED INDEX [IX_ViewSample] ON [dbo].[SampleView] ( ID2 ASC ) GO -- Select from view SELECT ID1,ID2,SomeData FROM SampleView ORDER BY ID2 GO -- Create Index on Original Table -- On Column ID1 CREATE UNIQUE CLUSTERED INDEX [IX_OriginalTable] ON mySampleTable ( ID1 ASC ) GO -- On Column ID2 CREATE UNIQUE NONCLUSTERED INDEX [IX_OriginalTable_ID2] ON mySampleTable ( ID2 ) GO -- Select from view SELECT ID1,ID2,SomeData FROM SampleView ORDER BY ID2 GO Now let us see the execution plans for both of the SELECT statement. Before Index on Base Table (with Index on View): After Index on Base Table (with Index on View): Looking at both executions, it is very clear that with or without, the View is using Indexes. Alright, I have written 11 disadvantages of the Views. Now I have written one case where the View is using Indexes. Anybody who says that I am being harsh on Views can say now that I found one place where Index on View can be helpful. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL View, SQLServer, T SQL, Technology

    Read the article

  • SQL SERVER – Log File Growing for Model Database – model Database Log File Grew Too Big

    - by pinaldave
    After reading my earlier article SQL SERVER – master Database Log File Grew Too Big, I recently received an email from another reader asking why the log file of the model database grows every day when he is not carrying out any operations in the model database. As per the email, he is absolutely sure that he is doing nothing on his model database; he had used Policy Management to catch any T-SQL operation in the model database and there were none. This was indeed surprising to me. I sent a request for access to his server, which he happily agreed to, and within a minute we figured out the issue. He was taking a full backup of the model database every night. When I explained this to him, he did not believe it, so I quickly wrote down the following script; the results before and after running the script were very clear. What is the model database? The model database is used as the template for all databases created on an instance of SQL Server. Any object you create in the model database will automatically be created in any subsequent user database created on the server. NOTE: Do not run this in a production environment. During the demo, the model database was in the Full recovery model and only full backup operations were performed (no log backups). Before Backup Script Backup Script in loop DECLARE @FLAG INT SET @FLAG = 1 WHILE(@FLAG < 1000) BEGIN BACKUP DATABASE [model] TO  DISK = N'D:\model.bak' SET @FLAG = @FLAG + 1 END GO After Backup Script Why did this happen? The model database was in the Full recovery model, and taking a full backup is a logged operation. As there were no log backups and only full backups were performed on the model database, the size of the log file kept growing. Resolution: Change the recovery model of the model database from Full to Simple, and take a full backup of the model database only when you change something in the model database. Let me know if you have encountered a situation like this. If so, how did you resolve it? It will be interesting to know about your experience. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
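
    A sketch of the first resolution step in T-SQL; in the SIMPLE recovery model the log is truncated at checkpoints, so full backups no longer leave it growing:

```sql
-- Switch the model database to the SIMPLE recovery model.
ALTER DATABASE model SET RECOVERY SIMPLE;
```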

    Read the article

  • SQLAuthority News – First SQL Bangalore Event Report – Nov 24, 2012 – SQL Server User Group Bangalore

    - by pinaldave
    A very common question I often receive: do we have a SQL Server User Group in Bangalore? Yes! SQL Bangalore – we had our very first meeting on Nov 24, 2012, and very soon we are going to have another User Group meeting. The goal is to keep up a monthly rhythm of User Group meetings. If you are in the Bangalore area, please join the Facebook page and you will keep getting regular updates about SQL Server. In the very first meeting we had five 30-minute sessions and a fantastic time. We had the best of the best speakers presenting the five sessions. The event was inaugurated by Vinod Kumar M, presenting on T-SQL pitfalls. His excellent and eye-opening session was followed by Manas Dash, who enlightened everybody with the functions introduced in SQL Server 2012. We had a surprise guest from Mumbai – Raj Chaodhary. If you know him, you know he has a very interesting way of presenting sessions, and he presented on SQL joins. His hard-to-follow act was followed by Sudeepta, who presented on Contained Databases. This subject is quite entertaining and interesting. My session was last, and I was eagerly waiting to present. I had decided to do something new this time, so I had created around 52 slides and two demos, and I was committed to getting through all 52 slides and both demos in 25 minutes. I had an interesting story as well. Though I was a bit nervous, I was able to get through the complete slide deck and demos in the 25 minutes I had. We were also very fortunate to have an international guest, Lynn Langit from the USA, present at the event. She presented an overview of Big Data in very little time – something not everyone can do efficiently. We are very thankful to our sponsor Pluralsight for giving away an annual subscription worth USD 300; it was the most awaited moment of the day. Well, overall we had great fun with 100+ attendees learning SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Scripting with the Sun ZFS Storage 7000 Appliance

    - by Geoff Ongley
    The Sun ZFS Storage 7000 appliance has a user friendly and easy to understand graphical web based interface we call the "BUI" or "Browser User Interface".This interface is very useful for many tasks, but in some cases a script (or workflow) may be more appropriate, such as:Repetitive tasksTasks which work on (or obtain information about) a large number of shares or usersTasks which are triggered by an alert threshold (workflows)Tasks where you want a only very basic input, but a consistent output (workflows)The appliance scripting language is based on ECMAscript 3 (close to javascript). I'm not going to cover ECMAscript 3 in great depth (I'm far from an expert here), but I would like to show you some neat things you can do with the appliance, to get you started based on what I have found from my own playing around.I'm making the assumption you have some sort of programming background, and understand variables, arrays, functions to some extent - but of course if something is not clear, please let me know so I can fix it up or clarify it.Variable Declarations and ArraysVariablesECMAScript is a dynamically and weakly typed language. If you don't know what that means, google is your friend - but at a high level it means we can just declare variables with no specific type and on the fly.For example, I can declare a variable and use it straight away in the middle of my code, for example:projects=list();Which makes projects an array of values that are returned from the list(); function (which is usable in most contexts). With this kind of variable, I can do things like:projects.length (this property on array tells you how many objects are in it, good for for loops etc). Alternatively, I could say:projects=3;and now projects is just a simple number.Should we declare variables like this so loosely? In my opinion, the answer is no - I feel it is a better practice to declare variables you are going to use, before you use them - and given them an initial value. You can do so as follows:var myVariable=0;To demonstrate the ability to just randomly assign and change the type of variables, you can create a simple script at the cli as follows (bold for input):fishy10:> script("." to run)> run("cd /");("." to run)> run ("shares");("." to run)> var projects;("." to run)> projects=list();("." to run)> printf("Number of projects is: %d\n",projects.length);("." to run)> projects=152;("." to run)> printf("Value of the projects variable as an integer is now: %d\n",projects);("." to run)> .Number of projects is: 7Value of the projects variable as an integer is now: 152You can also confirm this behaviour by checking the typeof variable we are dealing with:fishy10:> script("." to run)> run("cd /");("." to run)> run ("shares");("." to run)> var projects;("." to run)> projects=list();("." to run)> printf("var projects is of type %s\n",typeof(projects));("." to run)> projects=152;("." to run)> printf("var projects is of type %s\n",typeof(projects));("." to run)> .var projects is of type objectvar projects is of type numberArraysSo you likely noticed that we have already touched on arrays, as the list(); (in the shares context) stored an array into the 'projects' variable.But what if you want to declare your own array? Easy! This is very similar to Java and other languages, we just instantiate a brand new "Array" object using the keyword new:var myArray = new Array();will create an array called "myArray".A quick example:fishy10:> script("." to run)> testArray = new Array();("." to run)> testArray[0]="This";("." 
to run)> testArray[1]="is";("." to run)> testArray[2]="just";("." to run)> testArray[3]="a";("." to run)> testArray[4]="test";("." to run)> for (i=0; i < testArray.length; i++)("." to run)> {("." to run)>    printf("Array element %d is %s\n",i,testArray[i]);("." to run)> }("." to run)> .Array element 0 is ThisArray element 1 is isArray element 2 is justArray element 3 is aArray element 4 is testWorking With LoopsFor LoopFor loops are very similar to those you will see in C, java and several other languages. One of the key differences here is, as you were made aware earlier, we can be a bit more sloppy with our variable declarations.The general way you would likely use a for loop is as follows:for (variable; test-case; modifier for variable){}For example, you may wish to declare a variable i as 0; and a MAX_ITERATIONS variable to determine how many times this loop should repeat:var i=0;var MAX_ITERATIONS=10;And then, use this variable to be tested against some case existing (has i reached MAX_ITERATIONS? - if not, increment i using i++);for (i=0; i < MAX_ITERATIONS; i++){ // some work to do}So lets run something like this on the appliance:fishy10:> script("." to run)> var i=0;("." to run)> var MAX_ITERATIONS=10;("." to run)> for (i=0; i < MAX_ITERATIONS; i++)("." to run)> {("." to run)>    printf("The number is %d\n",i);("." to run)> }("." to run)> .The number is 0The number is 1The number is 2The number is 3The number is 4The number is 5The number is 6The number is 7The number is 8The number is 9While LoopWhile loops again are very similar to other languages, we loop "while" a condition is met. For example:fishy10:> script("." to run)> var isTen=false;("." to run)> var counter=0;("." to run)> while(isTen==false)("." to run)> {("." to run)>    if (counter==10) ("." to run)>    { ("." to run)>            isTen=true;   ("." to run)>    } ("." to run)>    printf("Counter is %d\n",counter);("." to run)>    counter++;    ("." to run)> }("." to run)> printf("Loop has ended and Counter is %d\n",counter);("." to run)> .Counter is 0Counter is 1Counter is 2Counter is 3Counter is 4Counter is 5Counter is 6Counter is 7Counter is 8Counter is 9Counter is 10Loop has ended and Counter is 11So what do we notice here? Something has actually gone wrong - counter will technically be 11 once the loop completes... Why is this?Well, if we have a loop like this, where the 'while' condition that will end the loop may be set based on some other condition(s) existing (such as the counter has reached 10) - we must ensure that we  terminate this iteration of the loop when the condition is met - otherwise the rest of the code will be followed which may not be desirable. In other words, like in other languages, we will only ever check the loop condition once we are ready to perform the next iteration, so any other code after we set "isTen" to be true, will still be executed as we can see it was above.We can avoid this by adding a break into our loop once we know we have set the condition - this will stop the rest of the logic being processed in this iteration (and as such, counter will not be incremented). So lets try that again:fishy10:> script("." to run)> var isTen=false;("." to run)> var counter=0;("." to run)> while(isTen==false)("." to run)> {("." to run)>    if (counter==10) ("." to run)>    { ("." to run)>            isTen=true;   ("." to run)>            break;("." to run)>    } ("." to run)>    printf("Counter is %d\n",counter);("." to run)>    counter++;    ("." to run)> }("." 
to run)> printf("Loop has ended and Counter is %d\n", counter);("." to run)> .Counter is 0Counter is 1Counter is 2Counter is 3Counter is 4Counter is 5Counter is 6Counter is 7Counter is 8Counter is 9Loop has ended and Counter is 10Much better!Methods to Obtain and Manipulate DataGet MethodThe get method allows you to get simple properties from an object, for example a quota from a user. The syntax is fairly simple:var myVariable=get('property');An example of where you may wish to use this, is when you are getting a bunch of information about a user (such as quota information when in a shares context):var users=list();for(k=0; k < users.length; k++){     user=users[k];     run('select ' + user);     var username=get('name');     var usage=get('usage');     var quota=get('quota');...Which you can then use to your advantage - to print or manipulate infomation (you could change a user's information with a set method, based on the information returned from the get method). The set method is explained next.Set MethodThe set method can be used in a simple manner, similar to get. The syntax for set is:set('property','value'); // where value is a string, if it was a number, you don't need quotesFor example, we could set the quota on a share as follows (first observing the initial value):fishy10:shares default/test-geoff> script("." to run)> var currentQuota=get('quota');("." to run)> printf("Current Quota is: %s\n",currentQuota);("." to run)> set('quota','30G');("." to run)> run('commit');("." to run)> currentQuota=get('quota');("." to run)> printf("Current Quota is: %s\n",currentQuota);("." to run)> .Current Quota is: 0Current Quota is: 32212254720This shows us using both the get and set methods as can be used in scripts, of course when only setting an individual share, the above is overkill - it would be much easier to set it manually at the cli using 'set quota=3G' and then 'commit'.List MethodThe list method can be very powerful, especially in more complex scripts which iterate over large amounts of data and manipulate it if so desired. The general way you will use list is as follows:var myVar=list();Which will make "myVar" an array, containing all the objects in the relevant context (this could be a list of users, shares, projects, etc). You can then gather or manipulate data very easily.We could list all the shares and mountpoints in a given project for example:fishy10:shares another-project> script("." to run)> var shares=list();("." to run)> for (i=0; i < shares.length; i++)("." to run)> {("." to run)>    run('select ' + shares[i]);("." to run)>    var mountpoint=get('mountpoint');("." to run)>    printf("Share %s discovered, has mountpoint %s\n",shares[i],mountpoint);("." to run)>    run('done');("." to run)> }("." to run)> .Share and-another discovered, has mountpoint /export/another-project/and-anotherShare another-share discovered, has mountpoint /export/another-project/another-shareShare bob discovered, has mountpoint /export/another-projectShare more-shares-for-all discovered, has mountpoint /export/another-project/more-shares-for-allShare yep discovered, has mountpoint /export/another-project/yepWriting More Complex and Re-Usable CodeFunctionsThe best way to be able to write more complex code is to use functions to split up repeatable or reusable sections of your code. 
This also makes your more complex code easier to read and understand for other programmers.We write functions as follows:function functionName(variable1,variable2,...,variableN){}For example, we could have a function that takes a project name as input, and lists shares for that project (assuming we're already in the 'project' context - context is important!):function getShares(proj){        run('select ' + proj);        shares=list();        printf("Project: %s\n", proj);        for(j=0; j < shares.length; j++)        {                printf("Discovered share: %s\n",shares[i]);        }        run('done'); // exit selected project}Commenting your CodeLike any other language, a large part of making it readable and understandable is to comment it. You can use the same comment style as in C and Java amongst other languages.In other words, sngle line comments use://at the beginning of the comment.Multi line comments use:/*at the beginning, and:*/ at the end.For example, here we will use both:fishy10:> script("." to run)> // This is a test comment("." to run)> printf("doing some work...\n");("." to run)> /* This is a multi-line("." to run)> comment which I will span across("." to run)> three lines in total */("." to run)> printf("doing some more work...\n");("." to run)> .doing some work...doing some more work...Your comments do not have to be on their own, they can begin (particularly with single line comments this is handy) at the end of a statement, for examplevar projects=list(); // The variable projects is an array containing all projects on the system.Try and Catch StatementsYou may be used to using try and catch statements in other languages, and they can (and should) be utilised in your code to catch expected or unexpected error conditions, that you do NOT wish to stop your code from executing (if you do not catch these errors, your script will exit!):try{  // do some work}catch(err) // Catch any error that could occur{ // do something here under the error condition}For example, you may wish to only execute some code if a context can be reached. 
If you can't perform certain actions under certain circumstances, that may be perfectly acceptable.For example if you want to test a condition that only makes sense when looking at a SMB/NFS share, but does not make sense when you hit an iscsi or FC LUN, you don't want to stop all processing of other shares you may not have covered yet.For example we may wish to obtain quota information on all shares for all users on a share (but this makes no sense for a LUN):function getShareQuota(shar) // Get quota for each user of this share{        run('select ' + shar);        printf("  SHARE: %s\n", shar);        try        {                run('users');                printf("    %20s        %11s    %11s    %3s\n","Username","Usage(G)","Quota(G)","Quota(%)");                printf("    %20s        %11s    %11s    %4s\n","--------","--------","--------","----");                                users=list();                for(k=0; k < users.length; k++)                {                        user=users[k];                        getUserQuota(user);                }                run('done'); // exit user context        }        catch(err)        {                printf("    SKIPPING %s - This is NOT a NFS or CIFs share, not looking for users\n", shar);        }        run('done'); // done with this share}Running Scripts Remotely over SSHAs you have likely noticed, writing and running scripts for all but the simplest jobs directly on the appliance is not going to be a lot of fun.There's a couple of choices on what you can do here:Create scripts on a remote system and run them over sshCreate scripts, wrapping them in workflow code, so they are stored on the appliance and can be triggered under certain circumstances (like a threshold being reached)We'll cover the first one here, and then cover workflows later on (as these are for the most part just scripts with some wrapper information around them).Creating a SSH Public/Private SSH Key PairLog on to your handy Solaris box (You wouldn't be using any other OS, right? :P) and use ssh-keygen to create a pair of ssh keys. I'm storing this separate to my normal key:[geoff@lightning ~] ssh-keygen -t rsa -b 1024Generating public/private rsa key pair.Enter file in which to save the key (/export/home/geoff/.ssh/id_rsa): /export/home/geoff/.ssh/nas_key_rsaEnter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /export/home/geoff/.ssh/nas_key_rsa.Your public key has been saved in /export/home/geoff/.ssh/nas_key_rsa.pub.The key fingerprint is:7f:3d:53:f0:2a:5e:8b:2d:94:2a:55:77:66:5c:9b:14 geoff@lightningInstalling the Public Key on the ApplianceOn your Solaris host, observe the public key:[geoff@lightning ~] cat .ssh/nas_key_rsa.pub ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAvYfK3RIaAYmMHBOvyhKM41NaSmcgUMC3igPN5gUKJQvSnYmjuWG6CBr1CkF5UcDji7v19jG3qAD5lAMFn+L0CxgRr8TNaAU+hA4/tpAGkjm+dKYSyJgEdMIURweyyfUFXoerweR8AWW5xlovGKEWZTAfvJX9Zqvh8oMQ5UJLUUc= geoff@lightningNow, copy and paste everything after "ssh-rsa" and before "user@hostname" - in this case, geoff@lightning. 
That is, this bit:AAAAB3NzaC1yc2EAAAABIwAAAIEAvYfK3RIaAYmMHBOvyhKM41NaSmcgUMC3igPN5gUKJQvSnYmjuWG6CBr1CkF5UcDji7v19jG3qAD5lAMFn+L0CxgRr8TNaAU+hA4/tpAGkjm+dKYSyJgEdMIURweyyfUFXoerweR8AWW5xlovGKEWZTAfvJX9Zqvh8oMQ5UJLUUc=Logon to your appliance and get into the preferences -> keys area for this user (root):[geoff@lightning ~] ssh [email protected]: Last login: Mon Dec  6 17:13:28 2010 from 192.168.0.2fishy10:> configuration usersfishy10:configuration users> select rootfishy10:configuration users root> preferences fishy10:configuration users root preferences> keysOR do it all in one hit:fishy10:> configuration users select root preferences keysNow, we create a new public key that will be accepted for this user and set the type to RSA:fishy10:configuration users root preferences keys> createfishy10:configuration users root preferences key (uncommitted)> set type=RSASet the key itself using the string copied previously (between ssh-rsa and user@host), and set the key ensuring you put double quotes around it (eg. set key="<key>"):fishy10:configuration users root preferences key (uncommitted)> set key="AAAAB3NzaC1yc2EAAAABIwAAAIEAvYfK3RIaAYmMHBOvyhKM41NaSmcgUMC3igPN5gUKJQvSnYmjuWG6CBr1CkF5UcDji7v19jG3qAD5lAMFn+L0CxgRr8TNaAU+hA4/tpAGkjm+dKYSyJgEdMIURweyyfUFXoerweR8AWW5xlovGKEWZTAfvJX9Zqvh8oMQ5UJLUUc="Now set the comment for this key (do not use spaces):fishy10:configuration users root preferences key (uncommitted)> set comment="LightningRSAKey" Commit the new key:fishy10:configuration users root preferences key (uncommitted)> commitVerify the key is there:fishy10:configuration users root preferences keys> lsKeys:NAME     MODIFIED              TYPE   COMMENT                                  key-000  2010-10-25 20:56:42   RSA    cycloneRSAKey                           key-001  2010-12-6 17:44:53    RSA    LightningRSAKey                         As you can see, we now have my new key, and a previous key I have created on this appliance.Running your Script over SSH from a Remote SystemHere I have created a basic test script, and saved it as test.ecma3:[geoff@lightning ~] cat test.ecma3 script// This is a test script, By Geoff Ongley 2010.printf("Testing script remotely over ssh\n");.Now, we can run this script remotely with our keyless login:[geoff@lightning ~] ssh -i .ssh/nas_key_rsa root@fishy10 < test.ecma3Pseudo-terminal will not be allocated because stdin is not a terminal.Testing script remotely over sshPutting it Together - An Example Completed Quota Gathering ScriptSo now we have a lot of the basics to creating a script, let us do something useful, like, find out how much every user is using, on every share on the system (you will recognise some of the code from my previous examples): script/************************************** Quick and Dirty Quota Check script ** Written By Geoff Ongley            ** 25 October 2010                    **************************************/function getUserQuota(usr){        run('select ' + usr);        var username=get('name');        var usage=get('usage');        var quota=get('quota');        var usage_g=usage / 1073741824; // convert bytes to gigabytes        var quota_g=quota / 1073741824; // as above        var quota_percent=0        if (quota > 0)        {                quota_percent=(usage / quota)*(100/1);        }        printf("    %20s        %8.2f           %8.2f           %d%%\n",username,usage_g,quota_g,quota_percent);        run('done'); // done with this selected user}function getShareQuota(shar){        //printf("DEBUG: selecting share 
        //printf("DEBUG: selecting share %s\n", shar);
        run('select ' + shar);
        printf("  SHARE: %s\n", shar);
        try
        {
            run('users');
            printf("    %20s        %11s    %11s    %3s\n","Username","Usage(G)","Quota(G)","Quota(%)");
            printf("    %20s        %11s    %11s    %4s\n","--------","--------","--------","--------");
            users=list();
            for(k=0; k < users.length; k++)
            {
                user=users[k];
                getUserQuota(user);
            }
            run('done'); // exit user context
        }
        catch(err)
        {
            printf("    SKIPPING %s - This is NOT a NFS or CIFs share, not looking for users\n", shar);
        }
        run('done'); // done with this share
    }
    function getShares(proj)
    {
        //printf("DEBUG: selecting project %s\n",proj);
        run('select ' + proj);
        shares=list();
        printf("Project: %s\n", proj);
        for(j=0; j < shares.length; j++)
        {
            share=shares[j];
            getShareQuota(share);
        }
        run('done'); // exit selected project
    }
    function getProjects()
    {
        run('cd /');
        run('shares');
        projects=list();
        for (i=0; i < projects.length; i++)
        {
            var project=projects[i];
            getShares(project);
        }
        run('done'); // exit context for all projects
    }
    getProjects();
    .

Which can be run as follows, and will print information like this:

    [geoff@lightning ~/FISHWORKS_SCRIPTS] ssh -i ~/.ssh/nas_key_rsa root@fishy10 < get_quota_utilisation.ecma3
    Pseudo-terminal will not be allocated because stdin is not a terminal.
    Project: another-project
      SHARE: and-another
                Username        Usage(G)       Quota(G)    Quota(%)
                --------        --------       --------    --------
                  nobody            0.00           0.00          0%
                 geoffro            0.05           0.00          0%
                   Billy            0.10           0.00          0%
                    root            0.00           0.00          0%
            testing-user            0.05           0.00          0%
      SHARE: another-share
                Username        Usage(G)       Quota(G)    Quota(%)
                --------        --------       --------    --------
                    root            0.00           0.00          0%
                  nobody            0.00           0.00          0%
                 geoffro            0.05           0.49          9%
            testing-user            0.05           0.02        249%
                   Billy            0.10           0.29         33%
      SHARE: bob
                Username        Usage(G)       Quota(G)    Quota(%)
                --------        --------       --------    --------
                  nobody            0.00           0.00          0%
                    root            0.00           0.00          0%
      SHARE: more-shares-for-all
                Username        Usage(G)       Quota(G)    Quota(%)
                --------        --------       --------    --------
                   Billy            0.10           0.00          0%
            testing-user            0.05           0.00          0%
                  nobody            0.00           0.00          0%
                    root            0.00           0.00          0%
                 geoffro            0.05           0.00          0%
      SHARE: yep
                Username        Usage(G)       Quota(G)    Quota(%)
                --------        --------       --------    --------
                    root            0.00           0.00          0%
                  nobody            0.00           0.00          0%
                   Billy            0.10           0.01        999%
            testing-user            0.05           0.49          9%
                 geoffro            0.05           0.00          0%
    Project: default
      SHARE: Test-LUN
        SKIPPING Test-LUN - This is NOT a NFS or CIFs share, not looking for users
      SHARE: test-geoff
                Username        Usage(G)       Quota(G)    Quota(%)
                --------        --------       --------    --------
                 geoffro            0.05           0.00          0%
                    root            3.18          10.00         31%
                    uucp            0.00           0.00          0%
                  nobody            0.59           0.49        119%
    ^CKilled by signal 2.

Creating a Workflow

Workflows are scripts that we store on the appliance, which can execute either on request (even from the BUI), or on an event such as a threshold being met.

Workflow Basics

A workflow allows you to create a simple process that can be executed either interactively via the BUI, or by an alert being raised (for some threshold being reached, for example). The basic parameters you will have to set for your "workflow object" (notice you're creating a variable that embodies ECMAScript) are as follows (parameters is optional):

- name: A name for this workflow
- description: A description for the workflow
- parameters: A set of input parameters (useful when you need user input to execute the workflow)
- execute: The code, the script itself to execute, which will be function (parameters)

With parameters, you can specify things like this (slightly modified sample taken from the System Administration Guide):

    ...
    parameters:
    {
        variableParam1:
        {
            label: 'Name of Share',
            type: 'String'
        },
        variableParam2:
        {
            label: 'Share Size',
            type: 'size'
        }
    },
    execute: ....
    };

Note the commas separating the sections of name, parameters, execute, and so on. This is important! Also, there are plenty of properties you can set on the parameters for your workflow; these are described in the Sun ZFS Storage System Administration Guide.

Creating a Basic Workflow from a Basic Script

To make a basic script into a basic workflow, you need to wrap the following around your script to create a 'workflow' object:

    var workflow = {
    name: 'Get User Quotas',
    description: 'Displays Quota Utilisation for each user on each share',
    execute: function() {
        // (basic script goes here, minus the "script" at the beginning, and "." at the end)
    }
    };

However, it appears (at least in my experience to date) that the workflow object may only be happy with one function in the execute parameter - either that or I'm doing something wrong.
As far as I can tell, after execute: you should only have a basic one-function context, like so:

    execute: function(){}

To deal with this, and to give an example similar to our script earlier, I have created another simple quota check that shows the same basic functionality, but in a workflow format:

    var workflow = {
    name: 'Get User Quotas',
    description: 'Displays Quota Utilisation for each user on each share',
    execute: function () {
        run('cd /');
        run('shares');
        projects=list();
        for (i=0; i < projects.length; i++)
        {
            run('select ' + projects[i]);
            shares=list('filesystem');
            printf("Project: %s\n", projects[i]);
            for (j=0; j < shares.length; j++)
            {
                run('select ' + shares[j]);
                try
                {
                    run('users');
                    printf("  SHARE: %s\n", shares[j]);
                    printf("    %20s        %11s    %11s    %3s\n","Username","Usage(G)","Quota(G)","Quota(%)");
                    printf("    %20s        %11s    %11s    %4s\n","--------","--------","--------","-------");
                    users=list();
                    for (k=0; k < users.length; k++)
                    {
                        run('select ' + users[k]);
                        username=get('name');
                        usage=get('usage');
                        quota=get('quota');
                        usage_g=usage / 1073741824; // convert bytes to gigabytes
                        quota_g=quota / 1073741824; // as above
                        quota_percent=0;
                        if (quota > 0)
                        {
                            quota_percent=(usage / quota)*(100/1);
                        }
                        printf("    %20s        %8.2f   %8.2f   %d%%\n",username,usage_g,quota_g,quota_percent);
                        run('done'); // done with this user
                    }
                    run('done'); // exit user context
                }
                catch(err)
                {
                    // printf("    %s is a LUN, not looking for users\n", shares[j]);
                }
                run('done'); // exit selected share context
            }
            run('done'); // exit project context
        }
    }
    };
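To show how the parameters block from the Workflow Basics section ties into a complete workflow, here is a hedged sketch of a parameterised version of the same quota check. This is my own illustration rather than part of the original post: the workflow name and the projname parameter are made-up placeholders, and it assumes (as the Administration Guide's examples suggest) that the execute function receives the committed parameter values as properties of its argument:

    var workflow = {
    name: 'Get User Quotas for One Project',
    description: 'Displays quota utilisation for each user on each share of a chosen project',
    parameters:
    {
        projname:
        {
            label: 'Name of Project',
            type: 'String'
        }
    },
    execute: function (params) {
        run('cd /');
        run('shares');
        run('select ' + params.projname); // the value typed into the 'Name of Project' field
        shares=list('filesystem');
        printf("Project: %s\n", params.projname);
        for (j=0; j < shares.length; j++)
        {
            run('select ' + shares[j]);
            try
            {
                run('users');
                printf("  SHARE: %s\n", shares[j]);
                users=list();
                for (k=0; k < users.length; k++)
                {
                    run('select ' + users[k]);
                    printf("    %20s  %8.2f G\n", get('name'), get('usage') / 1073741824);
                    run('done'); // done with this user
                }
                run('done'); // exit user context
            }
            catch(err)
            {
                // LUNs have no users context, so skip them
            }
            run('done'); // exit selected share context
        }
        run('done'); // exit project context
    }
    };

The body follows the same pattern as the workflow above; the only difference is that the project to report on is chosen by whoever runs the workflow instead of being iterated over.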
Summary

The Sun ZFS Storage 7000 Appliance offers lots of different and interesting features to Sun/Oracle customers, including the world-renowned Analytics. Hopefully the above will help you to think of new creative things you could be doing by taking advantage of one of the other neat features, the internal scripting engine! Some references are below to help you continue learning more; I'll update this post as I do the same. Enjoy...

More information on ECMAScript 3

A complete reference to ECMAScript 3, which will help you learn more of the details you may be interested in, can be found here:
http://www.ecma-international.org/publications/files/ECMA-ST-ARCH/ECMA-262,%203rd%20edition,%20December%201999.pdf

More Information on Administering the Sun ZFS Storage 7000

The Sun ZFS Storage 7000 System Administration Guide can be a useful reference point, and can be found here:
http://wikis.sun.com/download/attachments/186238602/2010_Q3_2_ADMIN.pdf

    Read the article

  • Scripting Language Sessions at Oracle OpenWorld and MySQL Connect, 2012

    - by cj
    This post highlights some great scripting language sessions coming up at the Oracle OpenWorld and MySQL Connect conferences. These events are happening in San Francisco from the end of September. You can search for other interesting conference sessions in the Content Catalog. Also check out what is happening at JavaOne in that event's Content Catalog (I haven't included sessions from it in this post). To find the timeslots and locations of each session, click their respective link and check the "Session Schedule" box on the top right.

    GEN8431 - General Session: What's New in Oracle Database Application Development. This general session takes a look at what's been new in the last year in Oracle Database application development tools using the latest generation of database technology. Topics range from Oracle SQL Developer and Oracle Application Express to Java and PHP. (Thomas Kyte - Architect, Oracle)

    BOF9858 - Meet the Developers of Database Access Services (OCI, ODBC, DRCP, PHP, Python). This session is your opportunity to meet in person the Oracle developers who have built Oracle Database access tools and products such as the Oracle Call Interface (OCI), Oracle C++ Call Interface (OCCI), and Open Database Connectivity (ODBC) drivers; Transparent Application Failover (TAF); Oracle Database Instant Client; Database Resident Connection Pool (DRCP); Oracle Net Services, and so on. The team also works with those who develop the PHP, Ruby, Python, and Perl adapters for Oracle Database. Come discuss with them the features you like, your pains, and new product enhancements in the latest database technology.

    CON8506 - Syndication and Consolidation: Oracle Database Driver for MySQL Applications. This technical session presents a new Oracle Database driver that enables you to run MySQL applications (written in PHP, Perl, C, C++, and so on) against Oracle Database with almost no code change. Use cases for such a driver include application syndication such as interoperability across relational database management systems, application migration, and database consolidation. In addition, the session covers enhancements in database technology that enable and simplify the migration of third-party databases and applications to and consolidation with Oracle Database. Attend this session to learn more and see a live demo. (Srinath Krishnaswamy - Director, Software Development, Oracle. Kuassi Mensah - Director Product Management, Oracle. Mohammad Lari - Principal Technical Staff, Oracle)

    CON9167 - Current State of PHP and MySQL. Together, PHP and MySQL power large parts of the Web. The developers of both technologies continue to enhance their software to ensure that developers can be satisfied despite all their changing and growing needs. This session presents an overview of changes in PHP 5.4, which was released earlier this year, and shows you various new MySQL-related features available for PHP, from transparent client-side caching to direct support for scaling and high-availability needs. (Johannes Schlüter - Software Developer, Oracle)

    CON8983 - Sharding with PHP and MySQL. In deploying MySQL, scale-out techniques can be used to scale out reads, but for scaling out writes, other techniques have to be used. To distribute writes over a cluster, it is necessary to shard the database and store the shards on separate servers. This session provides a brief introduction to traditional MySQL scale-out techniques in preparation for a discussion on the different sharding techniques that can be used with MySQL server and how they can be implemented with PHP. You will learn about static and dynamic sharding schemes, their advantages and drawbacks, techniques for locating and moving shards, and techniques for resharding. (Mats Kindahl - Senior Principal Software Developer, Oracle)

    CON9268 - Developing Python Applications with MySQL Utilities and MySQL Connector/Python. This session discusses MySQL Connector/Python and the MySQL Utilities component of MySQL Workbench and explains how to write MySQL applications in Python. It includes in-depth explanations of the features of MySQL Connector/Python and the MySQL Utilities library, along with example code to illustrate the concepts. Those interested in learning how to expand or build their own utilities and connector features will benefit from the tips and tricks from the experts. This session also provides an opportunity to meet directly with the engineers and provide feedback on your issues and priorities. You can learn what exists today and influence future developments. (Geert Vanderkelen - Software Developer, Oracle)

    BOF9141 - MySQL Utilities and MySQL Connector/Python: Python Developers, Unite! Come to this lively discussion of the MySQL Utilities component of MySQL Workbench and MySQL Connector/Python. It includes in-depth explanations of the features and dives into the code for those interested in learning how to expand or build their own utilities and connector features. This is an audience-driven session, so put on your best Python shirt and let's talk about MySQL Utilities and MySQL Connector/Python. (Geert Vanderkelen - Software Developer, Oracle. Charles Bell - Senior Software Developer, Oracle)

    CON3290 - Integrating Oracle Database with a Social Network. Facebook, Flickr, YouTube, Google Maps. There are many social network sites, each with their own APIs for sharing data with them. Most developers do not realize that Oracle Database has base tools for communicating with these sites, enabling all manner of information, including multimedia, to be passed back and forth between the sites. This technical presentation goes through the methods in PL/SQL for connecting to, and then sending and retrieving, all types of data between these sites. (Marcelle Kratochvil - CTO, Piction)

    CON3291 - Storing and Tuning Unstructured Data and Multimedia in Oracle Database. Database administrators need to learn new skills and techniques when the decision is made in their organization to let Oracle Database manage its unstructured data. They will face new scalability challenges. A single row in a table can become larger than a whole database. This presentation covers the techniques a DBA needs for managing the large volume of data in a standard Oracle Database instance. (Marcelle Kratochvil - CTO, Piction)

    CON3292 - Using PHP, Perl, Visual Basic, Ruby, and Python for Multimedia in Oracle Database. These five programming languages are just some of the most popular ones in use at the moment in the marketplace. This presentation details how you can use them to access and retrieve multimedia from Oracle Database. It covers programming techniques and methods for achieving faster development against Oracle Database. (Marcelle Kratochvil - CTO, Piction)

    UGF5181 - Building Real-World Oracle DBA Tools in Perl. Perl is not normally associated with building mission-critical applications or DBA tools. Learn why Perl could be a good choice for building your next killer DBA app. This session draws on real-world experience of building DBA tools in Perl, showing the framework and architecture needed to deal with portability, efficiency, and maintainability. Topics include Perl frameworks; which Comprehensive Perl Archive Network (CPAN) modules are good to use; Perl and CPAN module licensing; Perl and Oracle connectivity; compiling and deploying your app; and an example of what is possible with Perl. (Arjen Visser - CEO & CTO, Dbvisit Software Limited)

    CON3153 - Perl: A DBA's and Developer's Best (Forgotten) Friend. This session reintroduces Perl as a language of choice for many solutions for DBAs and developers. Discover what makes Perl so successful and why it is so versatile in our day-to-day lives. Perl can automate all those manual tasks and is truly platform-independent. Perl may not be in the limelight the way other languages are, but it is a remarkable language, it is still very current with ongoing development, and it has amazing online resources. Learn what makes Perl so great (including CPAN), get an introduction to Perl language syntax, find out what you can use Perl for, hear how Oracle uses Perl, discover the best way to learn Perl, and take away a small Perl project challenge. (Arjen Visser - CEO & CTO, Dbvisit Software Limited)

    CON10332 - Oracle RightNow CX Cloud Service's Connect PHP API: Intro, What's New, and Roadmap. Connect PHP is a public API that enables developers to build solutions with the Oracle RightNow CX Cloud Service platform. This API is used primarily by developers working within the Oracle RightNow Customer Portal Cloud Service framework who are looking to gain access to data and services hosted by the Oracle RightNow CX Cloud Service platform through a backward-compatible API. Connect for PHP leverages the same data model and services as the Connect Web Services for SOAP API. Come to this session to get an introduction and learn what's new and what's coming up. (Mark Rhoads - Senior Principal Applications Engineer, Oracle. Mark Ericson - Sr. Principal Product Manager, Oracle)

    CON10330 - Oracle RightNow CX Cloud Service APIs and Frameworks Overview. Oracle RightNow CX Cloud Service APIs are available in the following areas: desktop UI, Web services, customer portal, PHP, and knowledge. These frameworks provide access to Oracle RightNow CX Cloud Service's Connect Common Object Model and custom objects. This session provides a broad overview of capabilities in all these areas. (Mark Ericson - Sr. Principal Product Manager, Oracle)

    Read the article

  • SQL SERVER – Preserve Leading Zero While Copying to Excel from SSMS

    - by pinaldave
    Earlier I wrote two articles about how to efficiently copy data from SSMS to Excel. Since I wrote those posts there has been plenty of interest in this subject, and there are a few questions I keep getting about it. One of them is how to preserve leading zeros while copying data from SSMS to Excel. Well, it is almost the same as my earlier post SQL SERVER – Excel Losing Decimal Values When Value Pasted from SSMS ResultSet. The key here is in Excel and not in SQL Server. The step is to change the format of the Excel cell from Number to Text, and that will preserve the value with leading or trailing zeros in Excel. However, I assume this is done for display purposes only, because once you convert the column to Text you may find it difficult to do numeric operations over it, for example aggregation or averages. If you need to do that, you should either convert the column back to Numeric in Excel or perform the calculation in the database and export the computed value along with the data. However, I have seen real-world requirements where the user has to have a numeric value with leading zeros in it for display purposes. Here is my suggestion: instead of manipulating the numeric value in the database and converting it to a character value, the ideal thing to do is to store it as a numeric value only in the database. Whatever changes you want to make for display purposes should be handled at display time, using the format functions of SQL or the application language. Honestly, the database is the data layer and presentation is the presentation layer – they are two different things and if possible they should not be mixed. If for any reason you cannot follow the above advice and you need to append leading zeros in the database itself, here are two of my previous articles I suggest you refer to. I am open to learning new tricks as these articles are almost three years old. Please share your opinions and suggestions in the comments area. SQL SERVER – Pad Right Side of Number with 0 – Fixed Width Number Display SQL SERVER – UDF – Pad Right Side of Number with 0 – Fixed Width Number Display Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Excel
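    To illustrate the display-time formatting suggested above, here is a minimal T-SQL sketch; the value and the fixed width of eight digits are placeholders of my own rather than anything from the referenced articles:

        -- Keep the stored value numeric; pad only in the result set handed to Excel or the application.
        DECLARE @AccountNumber INT = 42;

        -- Classic string padding, works on older versions of SQL Server
        SELECT RIGHT(REPLICATE('0', 8) + CAST(@AccountNumber AS VARCHAR(8)), 8) AS PaddedForDisplay;

        -- On SQL Server 2012 and later, FORMAT() expresses the same intent more directly
        SELECT FORMAT(@AccountNumber, '00000000') AS PaddedForDisplay;

    Either way the underlying column stays numeric, so aggregations and averages keep working, and the leading zeros appear only where they are needed for display.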

    Read the article

  • SQL SERVER – How to easily work with Database Diagrams

    - by Pinal Dave
    Databases are very widely used in the modern world. Regardless of its complexity, every database requires in-depth design. To practice along, please Download dbForge Studio now. The right methodology for designing a database is based on the foundations of data normalization, according to which we should first define the database's key elements, the entities. Afterwards, the attributes of the entities and the relations between them are determined. There is a strong opinion that the process of database design should start with a pencil and a blank sheet of paper. This might look old-fashioned nowadays, because SQL Server provides much richer functionality for designing databases: Database Diagrams. When using SSMS to work with Database Diagrams I realized two things: on the one hand, visualizing a schema allows designing a database more efficiently; on the other hand, when it came to creating a big schema, some difficulties occurred when designing with SSMS. Alternatives did not take long to appear, and dbForge Studio for SQL Server is one of them. Its features offer several advantages for working with Database Diagrams. For example, unlike SSMS, dbForge Studio lets you drag-and-drop several tables at once from the Database Explorer. This is my opinion, but personally I find this option very useful. Another great thing is that a diagram can be saved as both a graphic file and a special XML file, which, given an identical environment, can easily be opened on another server to continue the work. While working with dbForge Studio it turned out that it offers a wide set of elements to work with on the diagram. Noteworthy among these are containers, which allow aggregating diagram objects into thematic groups. Moreover, you can even place an image directly on the diagram if the schema design is based on a standard template. Each development environment has a different approach to storing a diagram (for example, SSMS stores them on the server side, whereas dbForge Studio stores them in a local file). I have not yet found a way to convert existing diagrams from SSMS to dbForge Studio; however, I hope the Devart developers will implement this feature in one of the following releases. All in all, editing Database Diagrams through dbForge Studio was a nice experience and sped up common database design tasks. Download dbForge Studio now. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL

    Read the article

< Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >