Search Results

Search found 36773 results on 1471 pages for 'sql statement syntax'.


  • PASS: The Legal Stuff

    - by Bill Graziano
    I wanted to give a little background on the legal status of PASS. The Professional Association for SQL Server (PASS) is an American corporation chartered in the state of Illinois. In America a corporation has to be chartered in a particular state. It has to abide by the laws of that state and potentially pay taxes to that state. Our bylaws and actions have to comply with Illinois state law and United States law. We maintain a mailing address in Chicago, Illinois, but our headquarters is currently in Vancouver, Canada. We have roughly a dozen people who work in our Vancouver headquarters and 4-5 more who work remotely on various projects. These aren't employees of PASS. They are employed by a management company that we hire to run the day-to-day operations of the organization. I'll have more on this arrangement in a future post.

    PASS is a non-profit corporation. The terms non-profit and not-for-profit are used interchangeably. In a for-profit corporation (or LLC) there are owners who are entitled to the profits of the company. In a non-profit there are no owners. As a non-profit, all the money earned by the organization must be retained or spent. No money flows out to shareholders, owners, or the board of directors. Any money not spent in furtherance of our mission is retained as financial reserves.

    Many non-profits apply for tax-exempt status. Being tax exempt means that an organization doesn't pay taxes on its profits. There are a variety of laws governing who can be tax exempt in the United States, and many professional associations are tax exempt; PASS, however, isn't. Because our mission revolves around the software of a single company, we aren't eligible for tax-exempt status.

    PASS was founded in the late 1990s by Microsoft and Platinum Technologies. Platinum was later purchased by Computer Associates. As the founding partners, Microsoft and CA each have two seats on the Board of Directors. The other six directors and three officers are elected as specified in our bylaws.

    As a non-profit, our bylaws lay out our governing practices. They must conform to Illinois and United States law. These bylaws specify that PASS is governed by a Board of Directors elected by the membership, with two members each from Microsoft and CA. You can find our bylaws, as well as a proposed update to them, on the governance page of the PASS web site.

    The last point I'd like to make is that PASS is completely self-funded. All of our $4 million in revenue comes from conference registrations, sponsorships, and advertising. We don't receive any money from anyone outside those channels. While we work closely with Microsoft, we are independent of them and derive only a very small percentage of our revenue from them.

    Read the article

  • Simple Merging Of PDF Documents with iTextSharp 5.4.5.0

    - by Mladen Prajdic
    As we were working on our first SQL Saturday in Slovenia, we came to the point where we had to print out the so-called SpeedPASSes for attendees. A SpeedPASS file is a PDF containing the attendee's raffle, lunch, and admission tickets. The problem is that we have to download one PDF per attendee and print each one out, and printing more than 10 docs at once is a pain. So I decided to make a little console app that would merge multiple PDF files into a single file that would be much easier to print. I used an open source PDF manipulation library called iTextSharp, version 5.4.5.0.

    This is the console program I used. It's brilliantly named MergeSpeedPASS. It only has two methods and is really short. Don't let the name fool you: it can be used to merge any PDF files. The first parameter is the name of the target PDF file that will be created. The second parameter is the directory containing the PDF files to be merged into a single file.

        using iTextSharp.text;
        using iTextSharp.text.pdf;
        using System;
        using System.IO;

        namespace MergeSpeedPASS
        {
            class Program
            {
                static void Main(string[] args)
                {
                    if (args.Length == 0 || args[0] == "-h" || args[0] == "/h")
                    {
                        Console.WriteLine("Welcome to MergeSpeedPASS. Created by Mladen Prajdic. Uses iTextSharp 5.4.5.0.");
                        Console.WriteLine("Tool to create a single SpeedPASS PDF from all downloaded generated PDFs.");
                        Console.WriteLine("");
                        Console.WriteLine("Example: MergeSpeedPASS.exe targetFileName sourceDir");
                        Console.WriteLine("         targetFileName = name of the new merged PDF file. Must include .pdf extension.");
                        Console.WriteLine("         sourceDir      = path to the dir containing downloaded attendee SpeedPASS PDFs");
                        Console.WriteLine("");
                        Console.WriteLine(@"Example: MergeSpeedPASS.exe MergedSpeedPASS.pdf d:\Downloads\SQLSaturdaySpeedPASSFiles");
                    }
                    else if (args.Length == 2)
                        CreateMergedPDF(args[0], args[1]);

                    Console.WriteLine("");
                    Console.WriteLine("Press any key to exit...");
                    Console.Read();
                }

                static void CreateMergedPDF(string targetPDF, string sourceDir)
                {
                    using (FileStream stream = new FileStream(targetPDF, FileMode.Create))
                    {
                        Document pdfDoc = new Document(PageSize.A4);
                        PdfCopy pdf = new PdfCopy(pdfDoc, stream);
                        pdfDoc.Open();
                        var files = Directory.GetFiles(sourceDir);
                        Console.WriteLine("Merging files count: " + files.Length);
                        int i = 1;
                        foreach (string file in files)
                        {
                            Console.WriteLine(i + ". Adding: " + file);
                            pdf.AddDocument(new PdfReader(file));
                            i++;
                        }
                        if (pdfDoc != null)
                            pdfDoc.Close();
                        Console.WriteLine("SpeedPASS PDF merge complete.");
                    }
                }
            }
        }

    Hope it helps you and have fun.

    Read the article

  • PASS Summit 2013 Review

    - by Ajarn Mark Caldwell
    As a long-standing member of PASS who lives in the greater Seattle area and has attended about nine of these Summits, let me start out by saying how GREAT it was to go to Charlotte, North Carolina this year. Many of the new folks that I met at the Summit this year, upon hearing that I was from Seattle, commented that I must have been disappointed to have to travel to the Summit this year after 5 years in a row in Seattle. Well, nothing could be further from the truth. I cheered loudly when I first heard that the 2013 Summit would be outside Seattle. I have many fond memories of trips to Orlando, Florida and Grapevine, Texas for past Summits (missed out on Denver, unfortunately).

    And there is a funny dynamic that takes place when the conference is local. If you do as I have done the last several years and save your company money by not getting a hotel, but rather just commute from home, then both family and coworkers tend to act like you're just on a normal schedule. For example, I have a young family, and my wife and kids really wanted to still see me come home "after work", but there are a whole lot of after-hours activities, social events, and great food to be enjoyed at the Summit each year. Even more so if you really capitalize on the opportunities to meet face-to-face with people you either met at previous Summits or have spoken to or heard of through Twitter, blogs, and forums. Then there is also the lovely commuting in Seattle traffic from neighboring cities rather than the convenience of just walking across the street from your hotel. So I'm just saying, there are really nice aspects of having the conference 2,500 miles away.

    Beyond that, the training was fantastic as usual. The SQL Server community has many outstanding presenters and experts with deep knowledge of the tools who are extremely willing to share all of that with anyone who wants to listen. The opening video with PASS President Bill Graziano in a NASCAR race turned dream sequence was very well done, and the keynotes, as usual, were great. This year I was particularly impressed with how well attended the Professional Development sessions were. Not too many years ago, those were very sparsely attended, but this year, the two that I attended were standing-room only, and these were not tiny rooms. I would say this is a testament both to the maturity of the attendees, realizing how important these topics are to career success, and to the ever-increasing skills of the presenters and the program committee for selecting speakers and topics that resonated with people. If, as is usually the case, you were not able to get to every session that you wanted to because there were just too darn many good ones, I encourage you to get the recordings.

    Overall, it was a great time, as these events always are. It was wonderful to see old friends and make new ones, and the people of Charlotte did an awesome job hosting the event and letting their hospitality shine (extra kudos to SQLSentry for all they did with the shuttle, maps, and other event sponsorships). We're back in Seattle next year (it is a release year, after all), but I would say that with the success of this year's event, I strongly encourage the Board and PASS HQ to firmly reestablish the location rotation schedule. I'll even go so far as to suggest standardizing on an alternating Seattle - Charlotte schedule, or something like that. If you missed the Summit this year, start saving now, and register early, so you can join us!

    Read the article

  • Fetching Partition Information

    - by Mike Femenella
    For a recent SSIS package at work I needed to determine the distinct values in a partition, the number of rows in each partition, and the filegroup name on which each partition resided, in order to come up with a grouping mechanism. Of course sys.partitions comes to mind for some of that, but there are a few other tables you need to link to in order to grab the information required.

    The table I'm working on contains 8.8 billion rows. Finding the distinct partition keys from this table was not a fast operation. My original solution was to create a temporary table, grab the distinct values for the partitioned column, then update via sys.partitions for the rows and the $PARTITION function for the partition id, and finally look back to the sys.filegroups table for the filegroup names. It wasn't pretty, and it could take up to 15 minutes to return the results. The primary issue is pulling distinct values from the table: DISTINCT queries against 8.8 billion rows don't go quickly.

    A few beers into a conversation with a friend, we ended up talking about work, which led to a conversation about the task described above. The solution was already built into SQL Server; it just needed to be pulled together. The first table I needed was sys.partition_range_values. This contains one row for each range boundary value of a partition function. In my case I have a partition function which uses dayid values; for example, July 4th would be represented as an int, 20130704. This table lists all of the dayid values which were defined in the function, which eliminated the need to query my source table for distinct dayid values - everything I needed was already built in here for me. The only caveat was that in my SSIS package I needed to create a bucket for any dayid values that were out of bounds for my function. For example, if my function handled 20130501 through 20130704 and I had day values of 20130401 or 20130705 in my table, these would not be listed in sys.partition_range_values, so I created an "everything else" bucket in my SSIS package just in case I had any dayid values unaccounted for.

    Getting the number of rows for a partition is very easy: the sys.partitions table carries the row counts, so it's just a matter of querying for the object_id with an index_id of 1 (the clustered index).

    The final piece of information was the filegroup name. There are two options available to get it: sys.data_spaces or sys.filegroups. For my query I chose sys.filegroups, but really it's a matter of preference and data needs. In order to bridge between the sys.partitions table and either sys.data_spaces or sys.filegroups you need the container_id. This can be done by joining sys.allocation_units.container_id to sys.partitions.hobt_id; sys.allocation_units contains the field data_space_id, which then lets you join in either sys.data_spaces or sys.filegroups. The end result is the query below, which typically executes for me in under 1 second. I've included both joins, with the sys.data_spaces join commented out. As I mentioned above, this shaves a good 10-15 minutes off of my original SSIS package and is a really easy tweak to get a boost in my ETL time. Enjoy.
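    The query itself didn't make it into this excerpt, so the sketch below is a hedged reconstruction from the joins described above. The table name, partition function name, and clustered-index assumption are placeholders, not the author's originals, and the boundary-to-partition alignment depends on whether the function is RANGE LEFT or RANGE RIGHT:

        SELECT prv.value            AS dayid,            -- boundary value from the partition function
               p.partition_number,
               p.rows,
               fg.name              AS filegroup_name
        FROM   sys.partitions p
        JOIN   sys.allocation_units au
               ON  au.container_id  = p.hobt_id
               AND au.type          = 1                  -- IN_ROW_DATA
        JOIN   sys.filegroups fg
               ON  fg.data_space_id = au.data_space_id
        --JOIN sys.data_spaces ds                        -- the alternative join, commented out
        --     ON  ds.data_space_id = au.data_space_id
        LEFT JOIN sys.partition_functions pf
               ON  pf.name = 'pf_DayId'                  -- hypothetical partition function name
        LEFT JOIN sys.partition_range_values prv
               ON  prv.function_id = pf.function_id
               AND prv.boundary_id = p.partition_number  -- shift by 1 for RANGE RIGHT functions
        WHERE  p.object_id = OBJECT_ID('dbo.BigFactTable')  -- hypothetical table name
          AND  p.index_id  = 1;                          -- the clustered index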

    Read the article

  • When row estimation goes wrong

    - by Dave Ballantyne
    Whilst working at a client site, I hit upon one of those issues where you are not sure if it is something entirely new, a bug, or a gap in your knowledge.

    The client had a large query that needed optimizing. The query itself looked pretty good: no UDFs, UNION ALL was used rather than UNION, and most of the predicates were sargable, other than one or two minor ones. There were a few extra joins that could be eradicated, and having fixed up the query I then started to dive into the plan. I could see all manner of spills in the hash joins and the sort operations; these are caused when SQL Server has not reserved enough memory and has to write to tempdb, a VERY expensive operation that is generally avoidable. These, however, are a symptom of a bad row estimation somewhere else, and when that bad estimation is combined with other estimation errors, chaos can ensue.

    Working my way back down the plan, I found the cause, and the more I thought about it the more I became convinced that the optimizer could be making a much more intelligent choice. The first step is to reproduce, and I was able to simplify the query down to a single join between two tables, Product and ProductStatus. From a business point of view this is quite fundamental: find the status of particular products, to show if they are 'active', 'inactive' or whatever. The query itself couldn't be any simpler.

    The estimated plan (ignoring the "!" warning, which is a missing index) showed Products with 27,984 rows and the join outputting 14,000. The actual plan shows how bad that estimation of 14,000 is: every row in Products has a corresponding row in ProductStatus. This is unsurprising; in fact it is guaranteed, as there is a trusted FK relationship between the two columns. There is no way that the actual output of the join can be different from the input.

    The optimizer is already partly aware of the foreign key metadata, and that can be seen in the simplification stage. If we drop the Description column from the query, the join to ProductStatus is optimized out. It serves no purpose in the query: there is no data required from the table, and the optimizer knows that the FK guarantees a matching row will exist, so it has been removed. Surely the same should be applied to the row estimations in the initial example, right? If you think so, please upvote this Connect item.

    So what are our options for fixing this error? Simply changing the join to a LEFT JOIN will cause the optimizer to think that we could allow the rows not to exist, and a subselect would also work. However, this is a client site; I'm not able to change each and every query where this join takes place. But there is a more global switch that will fix this error: trace flag 2301, described, perhaps loosely, as "Enable advanced decision support optimizations". We can test this on the original query in isolation by using the QUERYTRACEON option, and lo and behold our estimated plan now has the 'correct' estimation.

    Many thanks go to Paul White (b|t) for his help and for keeping me sane through this.
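    The actual queries and plan screenshots did not survive this excerpt; the sketch below shows the shape of the statements described, with hypothetical column names (the post only names the tables Product and ProductStatus):

        -- The simplified repro: an inner join across a trusted FK.
        SELECT p.ProductID, ps.Description
        FROM   dbo.Product p
        JOIN   dbo.ProductStatus ps ON ps.StatusID = p.StatusID;

        -- Per-query workaround: a subselect (a LEFT JOIN works the same way).
        SELECT p.ProductID,
               (SELECT ps.Description
                FROM   dbo.ProductStatus ps
                WHERE  ps.StatusID = p.StatusID) AS Description
        FROM   dbo.Product p;

        -- Testing trace flag 2301 on the original statement in isolation.
        SELECT p.ProductID, ps.Description
        FROM   dbo.Product p
        JOIN   dbo.ProductStatus ps ON ps.StatusID = p.StatusID
        OPTION (QUERYTRACEON 2301);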

    Read the article

  • Postgresql has broken apt-get on Ubuntu

    - by Raphie Palefsky-Smith
    On Ubuntu 12.04, whenever I try to install a package using apt-get I'm greeted by:

        The following packages have unmet dependencies:
         postgresql-9.1 : Depends: postgresql-client-9.1 but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    apt-get install postgresql-client-9.1 generates:

        The following packages have unmet dependencies:
         postgresql-client-9.1 : Breaks: postgresql-9.1 (< 9.1.6-0ubuntu12.04.1) but 9.1.3-2 is to be installed

    apt-get -f install and apt-get remove postgresql-9.1 both give:

        Removing postgresql-9.1 ...
         * Stopping PostgreSQL 9.1 database server
         * Error: /var/lib/postgresql/9.1/main is not accessible or does not exist
           ...fail!
        invoke-rc.d: initscript postgresql, action "stop" failed.
        dpkg: error processing postgresql-9.1 (--remove):
         subprocess installed pre-removal script returned error exit status 1
        Errors were encountered while processing:
         postgresql-9.1
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    So, apt-get is crippled, and I can't find a way out. Is there any way to resolve this without a re-install?

    EDIT: apt-cache show postgresql-9.1 returns three candidate versions:

        Package: postgresql-9.1
        Priority: optional
        Section: database
        Installed-Size: 11164
        Maintainer: Ubuntu Developers <[email protected]>
        Original-Maintainer: Martin Pitt <[email protected]>
        Architecture: amd64
        Version: 9.1.6-0ubuntu12.04.1
        Replaces: postgresql-contrib-9.1 (<< 9.1~beta1-3~), postgresql-plpython-9.1 (<< 9.1.6-0ubuntu12.04.1)
        Depends: libc6 (>= 2.15), libcomerr2 (>= 1.01), libgssapi-krb5-2 (>= 1.8+dfsg), libkrb5-3 (>= 1.6.dfsg.2), libldap-2.4-2 (>= 2.4.7), libpam0g (>= 0.99.7.1), libpq5 (>= 9.1~), libssl1.0.0 (>= 1.0.0), libxml2 (>= 2.7.4), postgresql-client-9.1, postgresql-common (>= 115~), tzdata, ssl-cert, locales
        Suggests: oidentd | ident-server, locales-all
        Conflicts: postgresql (<< 7.5)
        Breaks: postgresql-plpython-9.1 (<< 9.1.6-0ubuntu12.04.1)
        Filename: pool/main/p/postgresql-9.1/postgresql-9.1_9.1.6-0ubuntu12.04.1_amd64.deb
        Size: 4298270
        MD5sum: 9ee2ab5f25f949121f736ad80d735d57
        SHA1: 5eac1cca8d00c4aec4fb55c46fc2a013bc401642
        SHA256: 4e6c24c251a01f1b6a340c96d24fdbb92b5e2f8a2f4a8b6b08a0df0fe4cf62ab
        Description-en: object-relational SQL database, version 9.1 server
         PostgreSQL is a fully featured object-relational database management system. It supports a large part of the SQL standard and is designed to be extensible by users in many aspects. Some of the features are: ACID transactions, foreign keys, views, sequences, subqueries, triggers, user-defined types and functions, outer joins, multiversion concurrency control. Graphical user interfaces and bindings for many programming languages are available as well.
         .
         This package provides the database server for PostgreSQL 9.1. Servers for other major release versions can be installed simultaneously and are coordinated by the postgresql-common package. A package providing ident-server is needed if you want to authenticate remote connections with identd.
        Homepage: http://www.postgresql.org/
        Description-md5: c487fe4e86f0eac09ed9847282436059
        Bugs: https://bugs.launchpad.net/ubuntu/+filebug
        Origin: Ubuntu
        Supported: 5y
        Task: postgresql-server

        Package: postgresql-9.1
        Priority: optional
        Section: database
        Installed-Size: 11164
        Maintainer: Ubuntu Developers <[email protected]>
        Original-Maintainer: Martin Pitt <[email protected]>
        Architecture: amd64
        Version: 9.1.5-0ubuntu12.04
        Replaces: postgresql-contrib-9.1 (<< 9.1~beta1-3~), postgresql-plpython-9.1 (<< 9.1.5-0ubuntu12.04)
        Depends: libc6 (>= 2.15), libcomerr2 (>= 1.01), libgssapi-krb5-2 (>= 1.8+dfsg), libkrb5-3 (>= 1.6.dfsg.2), libldap-2.4-2 (>= 2.4.7), libpam0g (>= 0.99.7.1), libpq5 (>= 9.1~), libssl1.0.0 (>= 1.0.0), libxml2 (>= 2.7.4), postgresql-client-9.1, postgresql-common (>= 115~), tzdata, ssl-cert, locales
        Suggests: oidentd | ident-server, locales-all
        Conflicts: postgresql (<< 7.5)
        Breaks: postgresql-plpython-9.1 (<< 9.1.5-0ubuntu12.04)
        Filename: pool/main/p/postgresql-9.1/postgresql-9.1_9.1.5-0ubuntu12.04_amd64.deb
        Size: 4298028
        MD5sum: 3797b030ca8558a67b58e62cc0a22646
        SHA1: ad340a9693341621b82b7f91725fda781781c0fb
        SHA256: 99aa892971976b85bcf6fb2e1bb8bf3e3fb860190679a225e7ceeb8f33f0e84b
        Description-en: object-relational SQL database, version 9.1 server
         [description identical to above]
        Homepage: http://www.postgresql.org/
        Description-md5: c487fe4e86f0eac09ed9847282436059
        Bugs: https://bugs.launchpad.net/ubuntu/+filebug
        Origin: Ubuntu
        Supported: 5y
        Task: postgresql-server

        Package: postgresql-9.1
        Priority: optional
        Section: database
        Installed-Size: 11220
        Maintainer: Martin Pitt <[email protected]>
        Original-Maintainer: Martin Pitt <[email protected]>
        Architecture: amd64
        Version: 9.1.3-2
        Replaces: postgresql-contrib-9.1 (<< 9.1~beta1-3~), postgresql-plpython-9.1 (<< 9.1.3-2)
        Depends: libc6 (>= 2.15), libcomerr2 (>= 1.01), libgssapi-krb5-2 (>= 1.8+dfsg), libkrb5-3 (>= 1.6.dfsg.2), libldap-2.4-2 (>= 2.4.7), libpam0g (>= 0.99.7.1), libpq5 (>= 9.1~), libssl1.0.0 (>= 1.0.0), libxml2 (>= 2.7.4), postgresql-client-9.1, postgresql-common (>= 115~), tzdata, ssl-cert, locales
        Suggests: oidentd | ident-server, locales-all
        Conflicts: postgresql (<< 7.5)
        Breaks: postgresql-plpython-9.1 (<< 9.1.3-2)
        Filename: pool/main/p/postgresql-9.1/postgresql-9.1_9.1.3-2_amd64.deb
        Size: 4284744
        MD5sum: bad9aac349051fe86fd1c1f628797122
        SHA1: a3f5d6583cc6e2372a077d7c2fc7adfcfa0d504d
        SHA256: e885c32950f09db7498c90e12c4d1df0525038d6feb2f83e2e50f563fdde404a
        Description-en: object-relational SQL database, version 9.1 server
         [description identical to above]
        Homepage: http://www.postgresql.org/
        Description-md5: c487fe4e86f0eac09ed9847282436059
        Bugs: https://bugs.launchpad.net/ubuntu/+filebug
        Origin: Ubuntu
        Supported: 5y
        Task: postgresql-server

    Read the article

  • Apache Jmeter + Random Double

    - by Filipe Batista
    Is it possible to generate random double numbers in JMeter? I tried to use the Random config element, where I have defined:

        Minimum value: 47.9999 (RND1)
        Maximum value: 30.9999 (RND2)

    Then in the selected Prepared Statement I placed these values:

        Parameter values: ${RND1},${RND1},${RND2}
        Parameter types:  DOUBLE,DOUBLE,DOUBLE

    But it doesn't seem to work, because I receive an error:

        Response message: java.sql.SQLException: Cannot convert class java.lang.String to SQL type requested due to java.lang.NumberFormatException - For input string: "${RND1}"

    Read the article

  • Extracting data from Visual FoxPro databases

    - by whitequark
    I just got some 20 GB of data in a Visual FoxPro database with a custom frontend, probably written in the same framework, and need to extract that data in any well-known format. I don't know anything about VFP in particular, but as it speaks SQL, there should be a way of opening an SQL console, or maybe a vfpdump utility. How can I do that? Everything I have now is a bunch of obscure binary files and a frontend executable.

    Read the article

  • ShareWebDb_log.ldf is 97GB - How to reduce?

    - by pierre
    We run an SBS 2008 server with WSS. On the drive I have set aside for WSS, I'm fast running out of space due to the ShareWebDb_log.ldf file. I've tried to do what I've read online (change the recovery model, back up, and truncate the log) but I can't actually see how to do this via the SQL Server Management Studio tool. Can anyone shed any light? The database for this file does not show in the list, and I cannot expand Management > SQL Server Logs (because it's too large?).
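    For reference, the recovery-model change and log shrink described above can also be done in T-SQL rather than through the Management Studio UI. A minimal sketch, assuming the database is named ShareWebDb and its logical log file name is ShareWebDb_log (both guessed from the file name, so verify them first, and switch to SIMPLE recovery only if point-in-time log restores aren't needed):

        USE ShareWebDb;
        SELECT name, type_desc, size / 128 AS size_mb   -- size is in 8 KB pages
        FROM   sys.database_files;                      -- confirm the logical log file name

        ALTER DATABASE ShareWebDb SET RECOVERY SIMPLE;
        DBCC SHRINKFILE (N'ShareWebDb_log', 1024);      -- shrink the log to ~1 GB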

    Read the article

  • ODBC Drivers Missing on Windows Server 2003 64-bit OS

    - by Nagendra
    Hello All, I am working on a Windows Server 2003 x64 OS. Under ODBC connections, I am seeing only "SQL Server" and "SQL Native Client". I wanted to make a data source for .xls files. I have already tried the following, with no luck:
    - Went through the reply in "JET and ODBC driver missing, can not get data from MDBs", but it didn't help me; I think that link solves the problem for a 32-bit OS.
    - Installed MDAC 2.8.
    Any help on this would be greatly appreciated.

    Read the article

  • Databases supporting parallel processing across multiple servers

    - by David
    I need a database engine that can utilize multiple servers for processing a single SQL query in parallel. So far I know that this is possible with some engines, though none of them are feasible for me, either because of pricing or missing features. The engines currently known to me are:
    - MS SQL (Enterprise)
    - DB2 (Enterprise)
    - Oracle (Enterprise)
    - GridSQL
    - Greenplum

    Read the article

  • Mysql charset problem

    - by Newtonx
    I'm trying to import some data from one server to another, but when I do it I'm having problems with the charset. Words like Goiânia became GoiÃ¢nia, and conceição became conceiÃ§Ã£o. My application was set to use the latin1 charset.

    Server 1: MySQL charset: UTF-8 Unicode (utf8); table collation: latin1_swedish_ci
    Server 2: MySQL charset: UTF-8 Unicode (utf8); table collation: latin1_swedish_ci

    Command I used to export data from server 1:

        mysqldump -u root -p --default-character-set=iso-8859-1 database_name > db.sql

    Command used to restore to server 2:

        mysql -u root -p database_name < db.sql

    Read the article

  • pg_dump: Error message from server: ERROR: missing chunk number 0 for toast value 43712886 in pg_toast_16418

    - by Kirill Novozhilov
    After running

        $ pg_dumpall -U postgres -f /tmp/pgall.sql

    I see the following:

        pg_dump: SQL command failed
        pg_dump: Error message from server: ERROR: missing chunk number 0 for toast value 43712886 in pg_toast_16418
        pg_dump: The command was: COPY public.page_parts (id, name, filter_id, content, page_id) TO stdout;
        pg_dumpall: pg_dump failed on database "radiant", exiting

    I don't have earlier backups. How can I fix it? Thanks in advance.
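    No answer is included in this excerpt, but a commonly suggested first step for this kind of TOAST corruption, sketched here using the object names from the error message, is to reindex the affected TOAST table and then hunt for the damaged row:

        -- Reindex the TOAST table named in the error, then the owning table:
        REINDEX TABLE pg_toast.pg_toast_16418;
        REINDEX TABLE public.page_parts;

        -- If pg_dump still fails, locate the damaged row by forcing every
        -- TOASTed value to be detoasted; narrow the failing range with
        -- LIMIT/OFFSET, then delete or restore that row:
        SELECT id, length(content) FROM public.page_parts ORDER BY id;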

    Read the article

  • Ubuntu Launcher doesn't launch [closed]

    - by La Chamelle
    I use Ubuntu 11.04 and GNOME 2.32.1. I want to create a new launcher for SQL Developer on the desktop with the following values:

        Name:    SqlDeveloper
        Command: /bin/sh /opt/sqldeveloper/sqldeveloper.sh
        Icon:    an icon in the SQL Developer directory

    When I click or double-click on the launcher, nothing happens.

        $ ls -l /opt/sqldeveloper/sqldeveloper.sh
        -rwxr-xr-x /opt/sqldeveloper/sqldeveloper.sh

    What should I do?

    Read the article

  • Powershell: Conditionally changing objects in the pipeline

    - by axk
    I'm converting a CSV to SQL inserts and there's a nullable text column which I need to quote when it is not NULL. I would write something like the following for the conversion:

        Import-Csv data.csv | foreach { "INSERT INTO TABLE_NAME (COL1,COL2) VALUES ($($_.COL1),$($_.COL2));" >> inserts.sql }

    But I can't figure out how to add an additional step into the pipeline that checks whether COL2 is not equal to 'NULL' and quotes it in that case. How do I achieve such behavior?

    Read the article

  • How to bring Paging File usage metric to zero?

    - by AngryHacker
    I am trying to tune a SQL Server. Per Brent Ozar's Performance Tuning video, the PerfMon counter Paging File: %Usage should be zero, or ridiculously close to it. The average metric on my box is around 1.341%. The box has 18 GB of RAM, the SQL Server is off, and the Commit Charge Total is 1 GB, yet the PerfMon metric is not 0. The Performance tab of Task Manager states that PF Usage is 1.23 GB. What should I do to better tune the box?

    Read the article

  • Local service accounts

    - by Jeremy
    Normally we create service accounts in Active Directory, and when we install things like SQL Server we set the software's services to use those service accounts. The service accounts can't be used to log into a workstation interactively. For proofs of concept, we're installing SQL Server and other software on virtual Windows 7 workstations that aren't part of a domain, so we are creating local accounts that will be used by Windows services. Is it possible to stop those users from appearing as options on the login screen?

    Read the article

  • Bringing TechNet licensed server into production

    - by David Heggie
    I am currently trialling an install of Windows Server 2008 R2 + SQL Server 2008 R2 using license keys from my company's TechNet subscription. If the trial is successful, I'd like to bring the server into production as is (there's a lot of configuration on it that I don't want to repeat), so my question really is: is it possible to change the Windows and SQL Server licenses from TechNet ones to "proper" volume licenses and legally use the server in production? Or do I have to reinstall everything with volume licences?

    Read the article

  • mysqldump is not dumping my data

    - by oompahloompah
    I am running mysqldump on Ubuntu Linux (10.04 LTS); my MySQL version info is: mysql Ver 14.14 Distrib 5.1.41, for debian-linux-gnu (i486) using readline 6.1. I used the following command:

        mysqldump -u username -p dbname > dbname_backup.sql

    However, when I opened the generated .sql file, I saw that most of the tables had only the schema dumped, and in the few cases where the actual data was dumped, only one or two records were present (there are at least several tens of records in each table). Does anyone know what may be going on?

    Read the article

  • Restoring MySQL dump - ERROR 2006 (HY000)

    - by matcheek
    I am just trying to restore a MySQL dump. Below are the command and the error message. Can anybody give me some clues on how to approach this?

        10:54:16 Restoring C:\Users\matcheek\Documents\dumps\Dump20120405-1.sql
        Running: mysql.exe "--defaults-extra-file="d:\temp\tmpbvhy4i.cnf" " --host=127.0.0.1 --user=root --port=3306 --default-character-set=utf8 --comments < "C:\\Users\\matcheek\\Documents\\dumps\\Dump20120405-1.sql"
        ERROR 2006 (HY000) at line 271: MySQL server has gone away
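    No resolution appears in this excerpt, but ERROR 2006 during a restore is very often caused by a row in the dump exceeding the server's max_allowed_packet. A hedged sketch of the usual first check and fix (the 256 MB value is illustrative, not a recommendation):

        -- Check the current limit on the server:
        SHOW VARIABLES LIKE 'max_allowed_packet';

        -- Raise it for new connections (requires the SUPER privilege):
        SET GLOBAL max_allowed_packet = 268435456;  -- 256 MB

    The client has the same limit (for example, mysql --max_allowed_packet=256M), and a value set with SET GLOBAL does not survive a server restart, so it would also need to go into my.ini/my.cnf.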

    Read the article

  • Converting MSDN license to full, commercial license

    - by alex
    I had to throw a machine together in a bit of a hurry, to replace a machine that suddenly failed (no one had bothered to keep a "warm" backup). It has Windows Server 2008 and SQL Server 2008. The snag is that I installed them off our MSDN subscription media, due to not having "licensed" software at hand. I need to put this machine into production, and we are in the process of buying the licenses from a MS reseller now. Is there a way to "convert" the MSDN license to a production license on both Windows Server and SQL Server?

    Read the article

  • Problem with host-only in Virtual Box

    - by Kamiar
    Hi folks. I have VirtualBox on my computer, with Windows 7 as the host. My virtual machine runs Windows Server 2008 and SQL Server 2008. I have the SQL client tools on Windows 7 (the host computer), but as you know, VirtualBox is set up with host-only networking. I need a connection from host to guest, but I cannot see the virtual computer (Win2008) from the host (Win7). Is there any solution?

    Read the article

  • ORA-12705 with OracleXE & Windows 7 & GlassFish

    - by bao
    I hate this problem... pls help! I have:
    - GlassFish v3 (build 74.2)
    - Windows 7 Pro English
    - Oracle XE 10.2.0

    Settings:

        SQL> select * from nls_database_parameters;

        PARAMETER                 VALUE
        ------------------------- ------------------------------
        NLS_LANGUAGE              AMERICAN
        NLS_TERRITORY             AMERICA
        NLS_CURRENCY              $
        NLS_ISO_CURRENCY          AMERICA
        NLS_NUMERIC_CHARACTERS    .,
        NLS_CHARACTERSET          WE8MSWIN1252
        NLS_CALENDAR              GREGORIAN
        NLS_DATE_FORMAT           DD-MON-RR
        NLS_DATE_LANGUAGE         AMERICAN
        NLS_SORT                  BINARY
        NLS_TIME_FORMAT           HH.MI.SSXFF AM
        NLS_TIMESTAMP_FORMAT      DD-MON-RR HH.MI.SSXFF AM
        NLS_TIME_TZ_FORMAT        HH.MI.SSXFF AM TZR
        NLS_TIMESTAMP_TZ_FORMAT   DD-MON-RR HH.MI.SSXFF AM TZR
        NLS_DUAL_CURRENCY         $
        NLS_COMP                  BINARY
        NLS_LENGTH_SEMANTICS      BYTE
        NLS_NCHAR_CONV_EXCP       FALSE
        NLS_NCHAR_CHARACTERSET    AL16UTF16
        NLS_RDBMS_VERSION         10.2.0.1.0

        20 rows selected.

        SQL> HOST ECHO %NLS_LANG%
        AMERICAN_AMERICA.WE8MSWIN1252

    NLS_LANG in the registry is AMERICAN_AMERICA.WE8MSWIN1252, and ORACLE_HOME in the registry is C:\oraclexe\app\oracle\product\10.2.0\server.

    I am creating a connection pool in the GlassFish admin web GUI and trying to ping it. This error is in the log:

        [#|2010-05-15T23:19:26.958+0400|WARNING|glassfishv3.0|javax.enterprise.resource.resourceadapter.com.sun.enterprise.connectors.service|_ThreadID=29;_ThreadName=Thread-1;|RAR8054: Exception while creating an unpooled [test] connection for pool [ phut ], Connection could not be allocated because: ORA-00604: error occurred at recursive SQL level 1 ORA-12705: Cannot access NLS data files or invalid environment specified |#]

    HOW TO FIX??

    Read the article

  • Problem using SQLDMO/Vb6 against SQL2008

    - by E.J. Brennan
    I have a client that uses SQLDMO in a portion of a custom application that was written against SQL 2000, and they recently upgraded to SQL 2008. The majority of the app still runs fine (it doesn't use SQLDMO), but the admin functions, which rely on SQLDMO, stopped working. I installed the SQL 2005 backward compatibility pack, and now SQLDMO partially works: I can run "select" type queries, but any "update" queries fail with the error message:

        To connect to this server you must use SQL Server Management Studio or SQL Server Management Objects (SMO)

    Any thoughts? Should the backward compatibility pack give me ALL the functionality back, or is this a known issue? BTW: I realize SQLDMO has been deprecated and will go away next release; nonetheless I need to do what I can to solve the problem at hand.

    Read the article
