Search Results

Search found 57000 results on 2280 pages for 'alter database open'.


  • Implementing "select distinct ... from ..." over a list of Python dictionaries

    - by daveslab
    Hi folks, here is my problem: I have a list of Python dictionaries of identical form that are meant to represent the rows of a table in a database, something like this: [ {'ID': 1, 'NAME': 'Joe', 'CLASS': '8th', ... }, {'ID': 1, 'NAME': 'Joe', 'CLASS': '11th', ... }, ...] I have already written a function to get the unique values for a particular field in this list of dictionaries, which was trivial. That function implements something like: select distinct NAME from ... However, I want to get the unique combinations of multiple fields, similar to: select distinct NAME, CLASS from ... which I am finding to be non-trivial. Is there an algorithm or built-in Python function to help me with this quandary? Before you suggest loading the CSV files into a SQLite table or something similar: that is not an option in the environment I'm in, and trust me, it was my first thought.
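    A minimal sketch of one way to do this in plain Python, assuming the field values are hashable (the sample rows below are only illustrative):

    def distinct_values(rows, *fields):
        """Return unique combinations of the given fields, preserving first-seen order."""
        seen = set()
        result = []
        for row in rows:
            key = tuple(row[f] for f in fields)
            if key not in seen:
                seen.add(key)
                result.append(key)
        return result

    rows = [
        {'ID': 1, 'NAME': 'Joe', 'CLASS': '8th'},
        {'ID': 1, 'NAME': 'Joe', 'CLASS': '11th'},
        {'ID': 2, 'NAME': 'Ann', 'CLASS': '8th'},
        {'ID': 3, 'NAME': 'Joe', 'CLASS': '8th'},
    ]
    print(distinct_values(rows, 'NAME'))           # [('Joe',), ('Ann',)]
    print(distinct_values(rows, 'NAME', 'CLASS'))  # [('Joe', '8th'), ('Joe', '11th'), ('Ann', '8th')]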

    Read the article

  • FastObjects.NET (an OODB from Versant): performance in real scenarios?

    - by Lalit
    FastObjects.NET saves the whole class object (if marked with the Persistent attribute) at once in the file system (using serialization or a similar technology). They promise that it is even faster than the normal SQL DB approach. My team also thought it is better and faster to save the whole object at once instead of each field one by one. The definition from their website: FastObjects .NET 10.0 fully conforms to the Microsoft .NET 2.0 framework. Tightly integrated with Visual Studio 2005, it offers a developer-friendly, object-oriented alternative to a relational database for .NET persistence. I would like to hear your experiences of using FastObjects in production scenarios. They also make promises about indexing/transactions/clustering/replication.

    Read the article

  • Update multiple values in a single statement

    - by Kluge
    I have a master / detail table and want to update some summary values in the master table against the detail table. I know I can update them like this: update MasterTbl set TotalX = (select sum(X) from DetailTbl where DetailTbl.MasterID = MasterTbl.ID) update MasterTbl set TotalY = (select sum(Y) from DetailTbl where DetailTbl.MasterID = MasterTbl.ID) update MasterTbl set TotalZ = (select sum(Z) from DetailTbl where DetailTbl.MasterID = MasterTbl.ID) But, I'd like to do it in a single statement, something like this: update MasterTbl set TotalX = sum(DetailTbl.X), TotalY = sum(DetailTbl.Y), TotalZ = sum(DetailTbl.Z) from DetailTbl where DetailTbl.MasterID = MasterTbl.ID group by MasterID but that doesn't work. I've also tried versions that omit the "group by" clause. I'm not sure whether I'm bumping up against the limits of my particular database (Advantage), or the limits of my SQL. Probably the latter. Can anyone help?
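    The exact single-statement syntax is vendor-specific (an UPDATE ... FROM with a grouped derived table is the usual shape), so purely as an illustration of the "all three totals in one pass over the detail rows" idea, here is a small Python sketch with hypothetical detail tuples:

    from collections import defaultdict

    # Detail rows as (master_id, x, y, z) tuples -- hypothetical sample data
    detail_rows = [
        (1, 10, 1, 5),
        (1, 20, 2, 5),
        (2, 7, 3, 1),
    ]

    totals = defaultdict(lambda: [0, 0, 0])  # master_id -> [sum_x, sum_y, sum_z]
    for master_id, x, y, z in detail_rows:
        t = totals[master_id]
        t[0] += x
        t[1] += y
        t[2] += z

    # One "update" per master row, setting all three columns at once
    for master_id, (total_x, total_y, total_z) in totals.items():
        print(f"master {master_id}: TotalX={total_x}, TotalY={total_y}, TotalZ={total_z}")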

    Read the article

  • Delphi - change text color based on current record value

    - by Welliam
    I am using DB controls connected to a Firebird database, and I have a simple DBLabel whose text color I want to change based on the value in the current record. The user navigates with the DBNavigator, and I wrote code in the navigator's button-click event, but there is a problem: the code always reads the previous record's value, not the current one, so the color is wrong. For example: procedure <navigator button click>; begin if table1.FieldByName('field1').AsString = 'val1' then <dblabel.textcolor> := red else <dblabel.textcolor> := green; end; But as I said, the value is one record behind. Why is that, and what is the best approach to changing the label's text color? Thanks

    Read the article

  • PHP Booking timeslot

    - by boyee007
    I'm developing a PHP booking system based on timeslots, on a daily basis. I've set up 4 database tables: Bookslot (which stores all the ids - id_bookslot, id_user, id_therapist, id_timeslot), Timeslot (stores all the times in 15-minute gaps, e.g. 09:00, 09:15, 09:30, etc.), Therapist (stores all therapist details) and User (stores all the member details).

        ID_BOOKSLOT  ID_USER  ID_THERAPIST  ID_TIMESLOT
        1            10       1             1 (09:00)
        2            11       2             1 (09:00)
        3            12       3             2 (09:15)
        4            15       3             1 (09:00)

    Now, my issue is that it keeps repeating the timeslot when I echo the data, for example:

                 thera a      thera b      thera c
        -------------------------------------------------
        09:00    BOOKED       available    available
        09:00    available    BOOKED       available
        09:00    available    available    BOOKED
        09:15    available    BOOKED       available

    As you can see, 09:00 shows three times, and I want something like below:

                 thera a      thera b      thera c
        -------------------------------------------------
        09:00    BOOKED       BOOKED       BOOKED
        09:15    available    BOOKED       available

    There might be something wrong with how I join the tables, or something else. The code to join the tables: $mysqli->query("SELECT * FROM bookslot RIGHT JOIN timeslot ON bookslot.id_timeslot = timeslot.id_timeslot LEFT JOIN therapist ON bookslot.id_therapist = therapist.id_therapist"); If anyone has a solution for this, please help me out - I'd appreciate it very much!
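    The de-duplication being asked for is essentially a pivot of the joined rows into a timeslot-by-therapist grid; a minimal sketch of that grouping idea in Python (the row tuples and names below are hypothetical, mirroring the sample data):

    # Rows as they might come back from the join: (time, therapist, booked) -- hypothetical sample
    rows = [
        ("09:00", "thera a", True),
        ("09:00", "thera b", True),
        ("09:00", "thera c", True),
        ("09:15", "thera b", True),
    ]
    therapists = ["thera a", "thera b", "thera c"]
    timeslots = ["09:00", "09:15"]

    # One grid cell per (time, therapist), defaulting to "available"
    grid = {(t, th): "available" for t in timeslots for th in therapists}
    for time, therapist, booked in rows:
        if booked:
            grid[(time, therapist)] = "BOOKED"

    # One output row per timeslot, one column per therapist
    print("         " + "  ".join(f"{th:<10}" for th in therapists))
    for t in timeslots:
        print(f"{t}    " + "  ".join(f"{grid[(t, th)]:<10}" for th in therapists))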

    Read the article

  • How to break up a table holding 100 million+ records?

    - by Chiao
    We're currently storing answers to 52 predefined questions for our clients on our matchmaking site. We have over 30 million unique users, which adds up to a worst case of 52 x 30 million rows. Of these 52 questions, 11 are required and always answered. Our previous solution was to create an answer table for each question. This distributed our answer rows for faster insert/delete/update, but it also forced us into unconventional programming, such as dynamically creating a table each time a question is added/updated, or removing an answer table if it was to be destroyed permanently. We want to come up with a better solution for our third version but couldn't get very far yet. Any ideas for accomplishing this in another, perhaps more conventional, way?

    Read the article

  • What would be the most efficient way to do this search (mysql or text)?

    - by alex
    Suppose I have 500 rows of data, each with a paragraph of text (like this paragraph). That's it. I want to do a search that is not only based on whole words (%LIKE%, not FULL_TEXT). Which would be faster? Option 1: SELECT * FROM ... WHERE text LIKE "%query%"; - this puts the load on the database server. Option 2: select everything, then go through each row and do a substring find - this puts the load on the web server. This is a website, and people will be searching frequently.
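    A minimal sketch of option 2 (scanning in application code) in Python, assuming the 500 rows have already been fetched into memory; with only 500 short paragraphs either approach is likely fast, so the real question is where you prefer the load to sit:

    # Hypothetical rows fetched once from the database: (id, paragraph)
    rows = [
        (1, "Suppose I have 500 rows of data, each with a paragraph of text."),
        (2, "Another paragraph that mentions databases and searching."),
        (3, "Completely unrelated text."),
    ]

    def search(rows, query):
        """Case-insensitive substring match, the in-memory equivalent of LIKE '%query%'."""
        q = query.lower()
        return [(row_id, text) for row_id, text in rows if q in text.lower()]

    print(search(rows, "paragraph"))  # matches rows 1 and 2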

    Read the article

  • Remove seconds from the TIME function?

    - by user1876032
    Okay, so I have a database that uses the TIME type for a list of events and then displays the data as: echo "<b>Time Frame:"; echo "$time_start"; echo "&nbsp;-&nbsp;"; echo "$time_end"; That displays the time as 11:00:00, but I want the time to be 11:00 am. Is there any way to make this display as a standard "11:00"? And also, if I enter it in the MySQL database as 24-hour time (e.g. 13:00), can I make it display as 1:00 pm? I have tried many things. Please help. http://pastebin.com/kaTZzGrx
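    The conversion itself is just a time-format change; a minimal sketch of the idea in Python (in PHP, date() combined with strtotime() can do the equivalent):

    from datetime import datetime

    def pretty_time(t24):
        """Turn an 'HH:MM:SS' value from the database into e.g. '1:00 pm'."""
        t = datetime.strptime(t24, "%H:%M:%S")
        return t.strftime("%I:%M %p").lstrip("0").lower()

    print(pretty_time("11:00:00"))  # 11:00 am
    print(pretty_time("13:00:00"))  # 1:00 pm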

    Read the article

  • Rails - column not found for defined 'has_many' relationship

    - by Guanlun
    I define a Post class which can link or be linked to multiple posts. To do this I added a PostLink class which specifies from_post and to_post. I generated the PostLink class with rails g model post_link from_post:integer to_post:integer and of course rake db:migrate, and added belongs_to :from_post, :class_name => 'Post' belongs_to :to_post, :class_name => 'Post' to the class. I also have has_many :post_links in my Post class. I ran rails console and Post.new.post_links and got nil printed out, which is expected. However, after I save a Post using p = Post.new p.save and then run p.post_links, it prints out the following error message: SQLite3::SQLException: no such column: post_links.post_id: SELECT "post_links".* FROM "post_links" WHERE "post_links"."post_id" = 1 So does anybody know why post_links cannot be accessed after saving to the database?

    Read the article

  • A step-up from TiddlyWiki that is still 100% portable?

    - by Smandoli
    TiddlyWiki is a great idea, brilliantly implemented. I'm using it as a portable personal "knowledge manager," and these are the prize virtues: It travels on my USB flash memory stick and runs on any computer, regardless of operating system No software installation is needed on the computer (TiddlyWiki merely uses the Internet browser) No Internet connection is needed In terms of data retrieval functionality, it mimics a relational database (use of tags and internal links) Let's say I've got a million words of prose in 4,000 tiddlers (posts). I'm still testing, but it looks like TiddlyWiki gets very slow. Is there an app like TiddlyWiki that keeps all the virtues I listed above, and allows more storage? NOTE: Separation of content and presentation would be ideal. It's nifty that TiddlyWiki has everything in a single HTML document, but it's unhelpful in many ways. I don't care if a directory of assorted docs is needed (SQLite, XML?), as long as it's functionally self-contained.

    Read the article

  • I just restarted Apache and now the server is down

    - by James
    I am pretty terrified right now. I'm scared I'm going to get a call in a couple minutes from a hundred people saying the website doesn't work. I was at the terminal changing some configuration files when I went to restart the server to update the .conf files with this command: /etc/init.d/apache2 graceful After I ran that, none of the websites work and I have no idea what to do. There are about 100 errors I am getting according to the log files. They all begin with "PHP Notice" and most relate to "use of undefined constant" Also, I just spoke with a coworker, describing what I did, and he noticed that there are two installations of apache on the server and that I restarted the one that we don't use. This is what the error log says (assuming it's the correct error log): [Wed Jan 05 11:52:06 2011] [notice] Graceful restart requested, doing restart Warning: DocumentRoot [/u/apps/staging/antetr/current/public/] does not exist [Wed Jan 05 11:52:08 2011] [warn] NameVirtualHost *:80 has no VirtualHosts (98)Address already in use: make_sock: could not bind to address [::]:80 (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80 no listening sockets available, shutting down Unable to open logs (2)No such file or directory: apache2: could not open error log file /u/apps/production/madfilmdash/current/log/apache-error.log. Unable to open logs (2)No such file or directory: apache2: could not open error log file /u/apps/production/madfilmdash/current/log/apache-error.log. Unable to open logs (2)No such file or directory: apache2: could not open error log file /u/apps/production/madfilmdash/current/log/apache-error.log. Unable to open logs (2)No such file or directory: apache2: could not open error log file /u/apps/production/madfilmdash/current/log/apache-error.log. Unable to open logs (2)No such file or directory: apache2: could not open error log file /u/apps/production/madfilmdash/current/log/apache-error.log. Unable to open logs (2)No such file or directory: apache2: could not open error log file /u/apps/production/madfilmdash/current/log/apache-error.log. Unable to open logs (2)No such file or directory: apache2: could not open error log file /u/apps/production/madfilmdash/current/log/apache-error.log. Unable to open logs (2)No such file or directory: apache2: could not open error log file /u/apps/production/madfilmdash/current/log/apache-error.log. Unable to open logs FINAL UPDATE: Ok, I fixed it. The problem was (as you experts facepalming probably know) that it couldn't access an error log in the directory I was working in. I created an empty error log file and tried the restart command again and now all the sites are back up... Though my original problem is still there.. Thanks to all those who offered advice, it really helped and let me breathe for a moment.

    Read the article

  • LazyInitializationException when adding to a list that is held within an entity class using Hibernate

    - by molleman
    Right so i am working with hibernate gilead and gwt to persist my data on users and files of a website. my users have a list of file locations. i am using annotations to map my classes to the database. i am getting a org.hibernate.LazyInitializationException when i try to add file locations to the list that is held in the user class. this is a method below that is overridden from a external file upload servlet class that i am using. when the file uploads it calls this method. the user1 is loaded from the database elsewhere. the exception occurs at user1.getFileLocations().add(fileLocation); . i dont understand it really at all.! any help would be great. the stack trace of the error is below public String executeAction(HttpServletRequest request, List<FileItem> sessionFiles) throws UploadActionException { for (FileItem item : sessionFiles) { if (false == item.isFormField()) { try { YFUser user1 = (YFUser)getSession().getAttribute(SESSION_USER); // This is the location where a file will be stored String fileLocationString = "/Users/Stefano/Desktop/UploadedFiles/" + user1.getUsername(); File fl = new File(fileLocationString); fl.mkdir(); // so here i will create the a file container for my uploaded file File file = File.createTempFile("upload-", ".bin",fl); // this is where the file is written to disk item.write(file); // the FileLocation object is then created FileLocation fileLocation = new FileLocation(); fileLocation.setLocation(fileLocationString); //test System.out.println("file path = "+file.getPath()); user1.getFileLocations().add(fileLocation); //the line above is where the exception occurs } catch (Exception e) { throw new UploadActionException(e.getMessage()); } } removeSessionFileItems(request); } return null; } //This is the class file for a Your Files User @Entity @Table(name = "yf_user_table") public class YFUser implements Serializable,ILightEntity { @Id @GeneratedValue(strategy = GenerationType.AUTO) @Column(name = "user_id",nullable = false) private int userId; @Column(name = "username") private String username; @Column(name = "password") private String password; @Column(name = "email") private String email; @ManyToMany(cascade = CascadeType.ALL) @JoinTable(name = "USER_FILELOCATION", joinColumns = { @JoinColumn(name = "user_id") }, inverseJoinColumns = { @JoinColumn(name = "locationId") }) private List<FileLocation> fileLocations = new ArrayList<FileLocation>() ; public YFUser(){ } public int getUserId() { return userId; } private void setUserId(int userId) { this.userId = userId; } public String getUsername() { return username; } public void setUsername(String username) { this.username = username; } public String getPassword() { return password; } public void setPassword(String password) { this.password = password; } public String getEmail() { return email; } public void setEmail(String email) { this.email = email; } public List<FileLocation> getFileLocations() { if(fileLocations ==null){ fileLocations = new ArrayList<FileLocation>(); } return fileLocations; } public void setFileLocations(List<FileLocation> fileLocations) { this.fileLocations = fileLocations; } /* public void addFileLocation(FileLocation location){ fileLocations.add(location); }*/ @Override public void addProxyInformation(String property, Object proxyInfo) { // TODO Auto-generated method stub } @Override public String getDebugString() { // TODO Auto-generated method stub return null; } @Override public Object getProxyInformation(String property) { // TODO Auto-generated method stub return null; } 
@Override public boolean isInitialized(String property) { // TODO Auto-generated method stub return false; } @Override public void removeProxyInformation(String property) { // TODO Auto-generated method stub } @Override public void setInitialized(String property, boolean initialised) { // TODO Auto-generated method stub } @Override public Object getValue() { // TODO Auto-generated method stub return null; } } @Entity @Table(name = "fileLocationTable") public class FileLocation implements Serializable { @Id @GeneratedValue(strategy = GenerationType.AUTO) @Column(name = "locationId", updatable = false, nullable = false) private int ieId; @Column (name = "location") private String location; /* private List uploadedUsers = new ArrayList(); */ public FileLocation(){ } public int getIeId() { return ieId; } private void setIeId(int ieId) { this.ieId = ieId; } public String getLocation() { return location; } public void setLocation(String location) { this.location = location; } /* public List getUploadedUsers() { return uploadedUsers; } public void setUploadedUsers(List<YFUser> uploadedUsers) { this.uploadedUsers = uploadedUsers; } public void addUploadedUser(YFUser user){ uploadedUsers.add(user); } */ } Apr 2, 2010 11:33:12 PM org.hibernate.LazyInitializationException <init> SEVERE: failed to lazily initialize a collection of role: com.example.client.YFUser.fileLocations, no session or session was closed org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: com.example.client.YFUser.fileLocations, no session or session was closed at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationException(AbstractPersistentCollection.java:380) at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationExceptionIfNotConnected(AbstractPersistentCollection.java:372) at org.hibernate.collection.AbstractPersistentCollection.initialize(AbstractPersistentCollection.java:365) at org.hibernate.collection.AbstractPersistentCollection.write(AbstractPersistentCollection.java:205) at org.hibernate.collection.PersistentBag.add(PersistentBag.java:297) at com.example.server.TestServiceImpl.saveFileLocation(TestServiceImpl.java:132) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at net.sf.gilead.gwt.PersistentRemoteService.processCall(PersistentRemoteService.java:174) at com.google.gwt.user.server.rpc.RemoteServiceServlet.processPost(RemoteServiceServlet.java:224) at com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62) at javax.servlet.http.HttpServlet.service(HttpServlet.java:713) at javax.servlet.http.HttpServlet.service(HttpServlet.java:806) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:487) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:362) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:729) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.handler.RequestLogHandler.handle(RequestLogHandler.java:49) at 
org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:324) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:505) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:843) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:647) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:211) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:380) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:396) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:488) Apr 2, 2010 11:33:12 PM net.sf.gilead.core.PersistentBeanManager clonePojo INFO: Third party instance, not cloned : org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: com.example.client.YFUser.fileLocations, no session or session was closed

    Read the article

  • MySQL 5.5: imminent release? Oracle expected to announce the new version of the open-source DBMS on Wednesday

    MySQL 5.5: imminent release? Oracle expected to announce the new version of the open-source DBMS on Wednesday. Update of 13/12/10. This Wednesday, Oracle is holding a webinar to present "a major update to MySQL". Tomas Ulin, Vice President of MySQL Development, and Rob Young, Senior Product Manager, will unveil the latest advances in the open-source DBMS that the database giant picked up with its acquisition of Sun. Oracle had announced a release candidate of MySQL 5.5 at Oracle OpenWorld in September (see above). This time around, the project leads could announce its official availability.

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id), ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== -- EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; ; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  
The basic shape of the test query is the same for each of the three test tables: SELECT TOP (150) T.id, T.padding FROM dbo.Test AS T ORDER BY NEWID() OPTION (MAXDOP 1) ; Test 1 – CHAR(3999) Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution. DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TC.id, TC.padding FROM dbo.TestCHAR AS TC ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect. 
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post. CHAR(3999) Performance Summary: 5 seconds elapsed time 243MB memory grant 195MB tempdb usage 192MB estimated sort set 25,043 logical reads Sort Warning Test 2 – VARCHAR(MAX) We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TM.id, TM.padding FROM dbo.TestMAX AS TM ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting. MAX Performance Summary: 8 seconds elapsed time 245MB memory grant 391MB tempdb usage 193MB estimated sort set 25,043 logical reads Sort warning Test 3 – TEXT The same test again, but using the deprecated TEXT data type for the padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TT.id, TT.padding FROM dbo.TestTEXT AS TT ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. 
/ 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why: TEXT Performance Summary: 0.5 seconds elapsed time 9MB memory grant 5MB tempdb usage 5MB estimated sort set 207 logical reads 596 LOB logical reads Sort warning SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here? TEXT versus MAX Storage The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data. The MAXOOR Table The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.  
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages: CREATE TABLE dbo.TestMAXOOR ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id), ) ; EXECUTE sys.sp_tableoption @TableNamePattern = N'dbo.TestMAXOOR', @OptionName = 'large value types out of row', @OptionValue = 'true' ; SELECT large_value_types_out_of_row FROM sys.tables WHERE [schema_id] = SCHEMA_ID(N'dbo') AND name = N'TestMAXOOR' ; INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) ( padding ) SELECT SPACE(0) FROM dbo.TestCHAR ORDER BY id ; UPDATE TM WITH (TABLOCK) SET padding.WRITE (TC.padding, NULL, NULL) FROM dbo.TestMAXOOR AS TM JOIN dbo.TestCHAR AS TC ON TC.id = TM.id ; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR' ; CHECKPOINT ; Test 4 – MAXOOR We can now re-run our test on the MAXOOR (MAX out of row) table: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) MO.id, MO.padding FROM dbo.TestMAXOOR AS MO ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; TEXT Performance Summary: 0.3 seconds elapsed time 245MB memory grant 0MB tempdb usage 193MB estimated sort set 207 logical reads 446 LOB logical reads No sort warning The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort. The Hidden Problem There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.  
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use. The Solution The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables: SET STATISTICS IO ON ; WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id = ANY (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAX ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries: Table 'TestCHAR' logical reads 25511 lob logical reads 000 Table 'TestMAX'. logical reads 25511 lob logical reads 000 Table 'TestTEXT' logical reads 00412 lob logical reads 597 Table 'TestMAXOOR' logical reads 00413 lob logical reads 446 We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data. 
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id); The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are: Table 'TestCHAR' logical reads 835 lob logical reads 0 Table 'TestMAX' logical reads 835 lob logical reads 0 Table 'TestTEXT' logical reads 686 lob logical reads 597 Table 'TestMAXOOR' logical reads 686 lob logical reads 448 With the new index, all four queries use the same query plan (click to enlarge): Performance Summary: 0.3 seconds elapsed time 6MB memory grant 0MB tempdb usage 1MB sort set 835 logical reads (CHAR, MAX) 686 logical reads (TEXT, MAXOOR) 597 LOB logical reads (TEXT) 448 LOB logical reads (MAXOOR) No sort warning I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea Conclusion This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else: Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either. The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White email: @[email protected] twitter: @SQL_Kiwi

    Read the article

  • Salesforce.com takes on Oracle with Database.com, a service that aims to become "the future of databases"

    Salesforce.com takes on Oracle with Database.com, a 100% hosted database that aims to revolutionize the DBMS world. At its Dreamforce conference, currently taking place in San Francisco, Salesforce.com, the best-known vendor of cloud CRM, has just presented an extremely ambitious product, described by the company as "the future of databases". It is Database.com, the first 100% cloud DBMS. The platform aims to remove the problems of optimizing and maintaining traditional databases and hardware. "Cloud databases represent a major opportunity to facilitate ...

    Read the article

  • Google releases "Bali", a new SDK for VP8: the open-source video codec gains encoding speed and quality

    Google releases "Bali", a new SDK for VP8. The open-source video codec gains encoding speed and quality. Update of 10/03/11. Google has just updated its development environment around VP8, the open-source video codec behind WebM. According to Google, the new SDK encodes video one and a half times faster than the previous version (Aylesbury, see above) on x86 platforms. Another major new feature of "Bali" (the codename of this release) is improved multithreading and multi-core support. Here too, the objective was to increase encoding...

    Read the article

  • Download SQL Server 2008 R2 Express (Database Size Limit Increased to 10GB!)

    - by Aamir Hasan
    Yesterday I was researching SQL Server 2008 and found the new release of MS SQL Server 2008 R2, which has many new BI features and enhancements. There is one tiny feature that I am sure all of us will appreciate a lot: the product team has increased the database size limit for SQL Server 2008 R2 Express from 4 GB to 10 GB. So if you have a growing SQL Server Express database that is close to the 4 GB limit, hurry and upgrade to R2 Express. See the announcement from the product team. SQL Server 2008 R2 Express download.

    Read the article

  • How can I re-open the TrueCrypt window after it's been closed?

    - by user27451
    I installed TrueCrypt 7.1 Standard 64-bit on a fresh install of Ubuntu 11.10 64-bit. After finding the application in the Dash, I dragged its icon onto the Unity launcher. I then clicked that icon and TrueCrypt's main window opened. I mounted my encrypted file/volume and then closed the window to do some work. To re-open the TrueCrypt window I would normally click the small blue TrueCrypt icon that appears on the top panel. In Ubuntu 11.10 that icon is no longer there. I receive a message ("TrueCrypt is already running.") if I click the TrueCrypt icon in the launcher. How can I re-open the TrueCrypt window after it's been closed in Ubuntu 11.10?

    Read the article

  • "Linux has failed on the desktop," says the creator of GNOME: a blunt opinion that divides the open-source community

    "Linux has failed on the desktop," says the creator of GNOME: a blunt opinion that divides the open-source community. Miguel De Icaza, one of the creators and lead developer of GNOME, the free desktop environment for Linux, argues in an article that "Linux is a failure as a mainstream OS". The view has inevitably stirred up a major controversy in the open-source world, drawing sharp criticism from Linus Torvalds. Already known for his outspokenness and his taste for controversy, Miguel De Icaza, in a long blog post entitled "what killed the Linux desktop", lambastes the Linux community and its development choices...

    Read the article

  • How to get an application listed in the 'Open With' menu?

    - by user17182377182122092121021929
    I have installed Sublime Text 2 following http://www.technoreply.com/how-to-install-sublime-text-2-on-ubuntu-12-04-unity/. Now I can see it when I press the Super key and search for it. At the same time, I want to see it in the 'Open With' context menu. I mean, when I right-click on a file and choose 'Open With Other Application', how can I choose Sublime Text? I can't see an option there to do this. Thanks for help in advance.

    Read the article

  • Different database for Membership and our web data or use just one?

    - by Jesus Rodriguez
    Is it better to keep our Membership stuff on the DefaultConnection and create another connection (another database) for our data, or just one database for everything? If I have a MyAppContext and I want migrations for that context, it seems that I cannot also have migrations for UserContext (in other words, I can only migrate one context). So, with two different databases I can migrate either the users (maybe membership migration is weird) or the web data. Or I can merge UserContext and MyAppContext into one UserAndAppContext and migrate everything in one place, but that mixing also seems weird. What's the normal way to do this - one or two databases, and what should be migrated?

    Read the article

  • Where did ULSTraceLog go to in the SharePoint 2010 Logging Database?

    The Logging Database is one of the many new concepts that will make the life of many SharePoint administrators quite a bit more enjoyable. In SharePoint 2007 the Unified Logging System (ULS) logged all of its data to text files, typically found on your SharePoint server in 12\LOGS. We still have that in SharePoint 2010, but besides those text files, ULS can also write the data to a database! The advantages are obvious: easy to query, one central location for all servers in the farm, easy to build...

    Read the article

  • SSIS: Building SQL databases on-the-fly using concatenated SQL scripts

    - by DrJohn
    Over the years I have developed many techniques which help automate the whole SQL Server build process. In my current process, where I need to build entire OLAP data marts on-the-fly, I make regular use of a simple but very effective mechanism to concatenate all the SQL Scripts together from my SSMS (SQL Server Management Studio) projects. This proves invaluable because in two clicks I can redeploy an entire SQL Server database with all tables, views, stored procedures etc. Indeed, I can also use the concatenated SQL scripts with SSIS to build SQL Server databases on-the-fly. You may be surprised to learn that I often redeploy the database several times per day, or even several times per hour, during the development process. This is because the deployment errors are logged and you can quickly see where SQL Scripts have object dependency errors. For example, after changing a table structure you may have forgotten to change any related views. The deployment log immediately points out all the objects which failed to build so you can fix and redeploy the database very quickly. The alternative approach (i.e. doing changes in the database directly using the SSMS UI) would require you to check all dependent objects before making changes. The chances are that you will miss something and wonder why your app returns the wrong data – a common problem caused by changing a table without re-creating dependent views. Using SQL Projects in SSMS A great many developers fail to make use of SQL Projects in SSMS (SQL Server Management Studio). To me they are invaluable way of organizing your SQL Scripts. The screenshot below shows a typical SSMS solution made up of several projects – one project for tables, another for views etc. The key point is that the projects naturally fall into the right order in file system because of the project name. The number in the folder or file name ensures that the projects the SQL scripts are concatenated together in the order that they need to be executed. Hence the script filenames start with 100, 110 etc. Concatenating SQL Scripts To concatenate the SQL Scripts together into one file, I use notepad.exe to create a simple batch file (see example screenshot) which uses the TYPE command to write the content of the SQL Script files into a combined file. As the SQL Scripts are in several folders, I simply use several TYPE command multiple times and append the output together. If you are unfamiliar with batch files, you may not know that the angled bracket (>) means write output of the program into a file. Two angled brackets (>>) means append output of this program into a file. So the command-line DIR > filelist.txt would write the content of the DIR command into a file called filelist.txt. In the example shown above, the concatenated file is called SB_DDS.sql If, like me you place the concatenated file under source code control, then the source code control system will change the file's attribute to "read-only" which in turn would cause the TYPE command to fail. The ATTRIB command can be used to remove the read-only flag. Using SQLCmd to execute the concatenated file Now that the SQL Scripts are all in one big file, we can execute the script against a database using SQLCmd using another batch file as shown below: SQLCmd has numerous options, but the script shown above simply executes the SS_DDS.sql file against the SB_DDS_DB database on the local machine and logs the errors to a file called SB_DDS.log. 
    So after executing the batch file you can simply check the error log to see if your database built without a hitch. If you have errors, then simply fix the source files, re-create the concatenated file and re-run the SQLCmd to rebuild the database. This two click operation allows you to quickly identify and fix errors in your entire database definition.

    Using SSIS to execute the concatenated file

    To execute the concatenated SQL script using SSIS, you simply drop an Execute SQL task into your package and set the database connection as normal and then select File Connection as the SQLSourceType (as shown below). Create a file connection to your concatenated SQL script and you are ready to go.

    Tips and Tricks

    Add a new-line at end of every file: The most common problem encountered with this approach is that the GO statement on the last line of one file is placed on the same line as the comment at the top of the next file by the TYPE command. The easy fix to this is to ensure all your files have a new-line at the end.

    Remove all USE database statements: The SQLCmd identifies which database the script should be run against. So you should remove all USE database commands from your scripts - otherwise you may get unintentional side effects!!

    Do the Create Database separately: If you are using SSIS to create the database as well as create the objects and populate the database, then invoke the CREATE DATABASE command against the master database using a separate package before calling the package that executes the concatenated SQL script.
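    The concatenation step itself does not have to be a Windows batch file; here is a minimal sketch of the same idea in Python (the folder layout and file names are assumptions based on the numbering convention described above):

    from pathlib import Path

    # Assumed layout: project folders whose names start with a number that fixes the build order,
    # e.g. "100 Tables", "110 Views", each containing .sql scripts that also sort by name.
    solution_dir = Path(r"C:\Projects\SB_DDS")   # hypothetical solution folder
    output_file = solution_dir / "SB_DDS.sql"

    with output_file.open("w", encoding="utf-8") as combined:
        for script in sorted(solution_dir.rglob("*.sql")):
            if script == output_file:
                continue  # don't concatenate the output file into itself
            combined.write(script.read_text(encoding="utf-8"))
            combined.write("\n")  # guarantees a new-line between files (see the first tip above)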

    Read the article

  • SQL – What does ACID stand for in a database? – Contest to Win 24 Amazon Gift Cards and Joes 2 Pros 2012 Kit

    - by Pinal Dave
    We love puzzles. One of the brain's main tasks is to solve puzzles. Sometimes puzzles are very complicated (e.g. solving a Rubik's Cube or Sudoku) and sometimes they are very simple (multiplying 4 by 8, or finding the shortest route while driving). It is always fun to solve a puzzle, and it creates an experience which humans do not forget easily. The best puzzles are the ones where you have to do multiple things to reach the final goal. Let us do something similar today: we will have a contest where you can participate and win something interesting.

    Contest

    This contest has two parts. Question 1: What does ACID stand for in a database? This question seems very easy, but here is the twist: your answer should explain at least one of the ACID properties in detail. If you wish you can explain all four ACID properties, but to qualify you need to explain at least one. Question 2: What is the size of the installation file of NuoDB for any specific platform? You can answer this question in the following format – NuoDB installation file is of size __ MB for ___ platform. Click on the download link and download your installation file for NuoDB. You can figure out the file size from the properties of the file. We have exciting prizes for the winners.

    Prizes

    1) 24 Amazon Gift Cards of USD 10 for the next 24 hours, one card every hour. (Open anywhere in the world) 2) One grand winner will get the Joes 2 Pros SQL Server 2012 Training Kit worth USD 249. (Open where Amazon ships books). Amazon | 1 | 2 | 3 | 4 | 5

    Rules

    The contest will be open till July 21, 2013. All valid comments will be hidden till the result is announced. The winners will be announced on July 24, 2013. Hint: Download NuoDB. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article
