Search Results

Search found 64482 results on 2580 pages for 'sql source control'.

  • How can I fake SQL data while preserving statements, without commenting out my server-side code?

    - by Fedor
    I have to use hardcoded values for certain fields because at this moment we don't have access to the real data. When we do get access, I don't want to go through a lot of work uncommenting. Is it possible to keep this statement the way it is, except use '25' as the alias for ratecode?

        IF(special.ratecode IS NULL, br.ratecode, special.ratecode) AS ratecode,

    I have about 8 or so IF statements similar to this, and I'm just too lazy (even with vim) to re-append while commenting out each IF statement line by line. I would have to do this:

        $sql = 'SELECT u.*,';
        // IF(special.ratecode IS NULL, br.ratecode, special.ratecode) AS ratecode
        $sql .= '25 AS ratecode';
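
    One way to keep the real expression in place and still force the fake value is a session-level switch, so nothing ever gets commented out. A minimal sketch, assuming MySQL; the join tables and columns here are hypothetical stand-ins for the br/special aliases in the question:

        -- Flip this to 0 once the real data is available.
        SET @use_fake_data := 1;

        SELECT u.*,
               IF(@use_fake_data = 1,
                  25,  -- hardcoded placeholder value
                  IF(special.ratecode IS NULL, br.ratecode, special.ratecode)) AS ratecode
        FROM users u
        LEFT JOIN baserates br ON br.user_id = u.id
        LEFT JOIN specials special ON special.user_id = u.id;

    The same wrapper applies to all eight IF statements, and switching to live data becomes a one-line change.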

  • Can a SQL Server 2008 database support both a REST and SOAP web services within two different endpoints?

    - by PaulDecember
    Say you have a SQL Server 2008 database. You build a SOAP web service. You then deploy or publish this using Visual Studio 2010 on one website. Now, using the same database, you build a REST web service in a different solution. You deploy this on another website. Can you consume the endpoints and/or .svc file of both the SOAP and REST web services, though they reference the same SQL Server 2008 database? I don't see why not, but before I go down this path and spend days I'd like to make sure. Also, is there a performance hit to the database when it serves both SOAP and REST at the same time? Again, I don't see why there would be, but I must make sure. Thanks.

  • How to log the raw SQL from the Oracle OCCI C++ API?

    - by savanna
    One of our customers is complaining our application is not working. Their reasoning is that our SQL function call to their Oracle database is not getting the "expected" result. Sometimes a call should fail, but our application gets success from their database. It's really frustrating because it's their database and we cannot do any tests on it. We are using the C++ Oracle OCCI API. Is there any way we can log the raw SQL from our end? That would be very helpful, and we could ship the script to them and let them debug in their system to figure out the problem. Thanks in advance.
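
    If the customer is willing to run a script on their side, Oracle can capture the raw SQL (including bind values) on the server, so nothing needs to change in the OCCI client. A minimal sketch, assuming Oracle 10g+ and DBA privileges on their end; 'APP_USER' and the SID/SERIAL# values are placeholders:

        -- Find the session opened by the application (run as DBA):
        SELECT sid, serial#, username, program
        FROM v$session
        WHERE username = 'APP_USER';

        -- Enable tracing for that session; statements and binds go to a server-side trace file:
        BEGIN
          DBMS_MONITOR.session_trace_enable(session_id => 123,
                                            serial_num => 456,
                                            waits      => TRUE,
                                            binds      => TRUE);
        END;
        /

    The resulting trace file (written to the server's user dump directory) can then be formatted with tkprof.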

  • Do Distributed Version Control Systems promote poor backup habits?

    - by John
    In a DVCS, each developer has an entire repository on their workstation, to which they can commit all their changes. Then they can merge their repo with someone else's, or clone it, or whatever (as I understand it; I'm not a DVCS user). To me that flags a side effect: being more vulnerable to forgetting to back up. In a traditional centralised system, both you as a developer and the people in charge know that if you commit something, it's held on a central server which can have decent backup solutions in place. But using a DVCS, it seems you only have to push your work to a server when you feel like sharing it. It's all very well having the repo locally so you can work on your feature branch for a month without bothering anyone, but it means (I think) that checking in your code to the repo is not enough; you have to remember to do regular pushes to a backed-up server. It also means, doesn't it, that a team lead can't see all those nice SVN commit emails to keep a rough idea of what's going on in the code-base? Is any of this a real issue?

  • How to merge two databases into one in MS SQL Server 2008?

    - by SzamDev
    Hi, I have 2 PCs, each with MS SQL Server 2008 installed and a database with data in it. I need a way to move the data in my DB from one SQL Server to the other (another PC which has the same DB). There is one problem: the ID column. Because the DB on each PC has its own data, this column counts from 1, 2, 3, ... on both sides, so the incoming data will conflict with the data already in my DB. Is there any way to solve my problem and move the data successfully?
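
    A common approach is to shift the incoming IDs past the target database's current maximum before inserting, so the two ranges cannot collide. A minimal T-SQL sketch, assuming the source database has been restored or attached next to the target as SourceDB, and using a hypothetical Customers table:

        DECLARE @offset INT;
        SELECT @offset = ISNULL(MAX(ID), 0) FROM TargetDB.dbo.Customers;

        -- Allow explicit values in the identity column while copying:
        SET IDENTITY_INSERT TargetDB.dbo.Customers ON;

        INSERT INTO TargetDB.dbo.Customers (ID, Name, Email)
        SELECT ID + @offset, Name, Email
        FROM SourceDB.dbo.Customers;

        SET IDENTITY_INSERT TargetDB.dbo.Customers OFF;

    Any foreign keys referencing these IDs must be shifted by the same @offset, table by table, before the constraints are re-enabled.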

  • Do you have to call .Save() when modifying an application setting that is bound to a control property?

    - by Jordan S
    I am programming in .NET. I have an application setting of type string. On my form I have a textbox, and I bound the text property of the textbox to my application setting. If I type something in the textbox, it changes the value held in the application setting, but the next time I start the program it goes back to the default value. Do I need to call Properties.Settings.Default.Save(); after the text is entered for the new value to be saved? Shouldn't it do this automatically? Is there a way I can make it do so automatically?

  • How to make UPDATE queries in LINQ to SQL?

    - by Alex
    I like using LINQ to SQL. The only problem is that I don't like the default way of updating tables. Let's say I have the following table with the following columns: ID (primary key), value1, value2, value3, value4, value5. When I need to update something, I want to call:

        UPDATE ... WHERE ID=@id

    but LINQ to SQL calls:

        UPDATE ... WHERE ID=@id AND value1=@value1 AND value2=@value2 AND value3=@value3 AND value4=@value4 AND value5=@value5

    I can override this behavior by adding UpdateCheck=UpdateCheck.Never to every column, but with every update of the DataContext class with the GUI, this will be erased. Is there any way to tell LINQ to use this way of updating data?
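
    One option that survives designer regeneration is adding a rowversion column to the table; LINQ to SQL then uses that column for its concurrency check instead of comparing every member. A minimal sketch (the column name is an assumption):

        -- SQL Server auto-populates rowversion columns; no backfill needed.
        ALTER TABLE dbo.MyTable ADD RowVer rowversion;

    After refreshing the table in the designer, updates should be issued as UPDATE ... WHERE ID = @id AND RowVer = @version, and the per-column checks disappear without setting UpdateCheck on each column by hand.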

  • How to get the 2nd and 3rd batches in the same query result in Oracle SQL + Yii framework?

    - by sasori
    Let's say I have 20 results in the SQL query. If I use the limit in the Yii active record, I'll obviously get the first four from the result, but what if I want to get the 2nd four and then the 3rd four from the same query result? How do I query that via SQL? E.g.:

        $criteria2 = new CDbCriteria();
        $criteria2->select = 'USERID, ADID, ADTYPE, ADTITLE, ADDESC, PAGEVIEW, DISPPUBLISHDATE';
        $criteria2->addCondition("STATUS = 1");
        $criteria2->order = '"t".PAGEVIEW DESC, "t".PUBLISHDATE DESC';
        $criteria2->limit = 4;
        $criteria2->with = array('subcat', 'adimages');
        $result = $this->findAll($criteria2);
        return $result;
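
    In Yii itself, the 2nd and 3rd batches are just $criteria2->offset = 4; and $criteria2->offset = 8; alongside limit = 4. At the SQL level, classic pre-12c Oracle pagination needs the double-nested ROWNUM form, because ROWNUM is assigned before ORDER BY is applied. A sketch assuming a hypothetical ads table matching the columns above:

        SELECT *
        FROM (
            SELECT q.*, ROWNUM AS rn
            FROM (
                SELECT userid, adid, adtype, adtitle, addesc, pageview, disppublishdate
                FROM ads
                WHERE status = 1
                ORDER BY pageview DESC, publishdate DESC
            ) q
            WHERE ROWNUM <= 8   -- upper bound: offset + limit
        )
        WHERE rn > 4;           -- lower bound: offset, so rows 5-8 are the 2nd batch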

  • Should custom data elements be stored as XML or database entries?

    - by meteorainer
    There are a ton of questions like this, but they are mostly very generalized, so I'd like to get some views on my specific usage.

    General: I'm building a new project on my own in Django. Its focus will be on small businesses. I'd like to make it somewhat customizable for my clients so they can add to their customer/invoice/employee/whatever items. My models would reflect boilerplate items that all ModelX might have. For example: first name, last name, email, address, ... Then my users would be able to add fields for whatever data they might like. I'm still in the design phase and am building this myself, so I've got some options.

    Working on... Right now the 'extra items' models have a FK to the generic model (Customer and CustomerDataPoints, for example). All values in the extra data points are stored as char and will be coerced/parsed into their actual format at view building. In this build the user could theoretically add whatever values they want, group them in sets, and generally access them at will from the views relevant to that model. Pros: low storage overhead, very extensible, searchable. Cons: more SQL joins.

    My other option is to use some type of markup, or key-value pairing, stored directly on the boilerplate models. This could essentially be any low-overhead method, whether XML or literal strings. The view and form generated from the stored data would take control of validation and reorganizing on updates. Then it would just dump the data back in as a char/blob/whatever. Something like:

        <datapoint type='char' value='something' required='true' />
        <datapoint type='date' value='01/01/2001' required='false' />
        ...

    Pros: no joins needed; updates for validation and views are decoupled from the data. Cons: much higher storage overhead; limited capacity to search on the extra content.

    So my question is: if you didn't live with the constraints imposed by your company, which method would you use? Why? What benefits or pitfalls do you see down the road for me as a small business trying to help other small businesses? Just to clarify, I am not asking about custom UI elements; those I can handle with forms and template snippets. I'm asking primarily about data storage and retrieval of non-standardized data relative to a boilerplate model.
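
    For the first option, the relational shape is a classic entity-attribute-value pair of tables. A minimal SQL sketch (generic SQL; all names are assumptions simplified from the Django models described above):

        CREATE TABLE customer (
            id         INTEGER PRIMARY KEY,
            first_name VARCHAR(100),
            last_name  VARCHAR(100),
            email      VARCHAR(254)
        );

        CREATE TABLE customer_datapoint (
            id          INTEGER PRIMARY KEY,
            customer_id INTEGER NOT NULL REFERENCES customer(id),
            name        VARCHAR(100) NOT NULL,  -- e.g. 'loyalty_tier'
            datatype    VARCHAR(20)  NOT NULL,  -- 'char', 'date', ...; coerced at view building
            value       VARCHAR(255),           -- everything stored as char, as described
            required    BOOLEAN NOT NULL DEFAULT FALSE
        );

        -- The "more SQL joins" cost in practice: one extra join per custom field looked up.
        SELECT c.first_name, c.last_name, d.value AS loyalty_tier
        FROM customer c
        LEFT JOIN customer_datapoint d
               ON d.customer_id = c.id AND d.name = 'loyalty_tier';

    An index on (customer_id, name) keeps those lookups cheap and preserves the searchability advantage over the XML/blob route.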

  • Microsoft releases SQL Server 2014 CTP 2 and touts its new In-Memory capabilities, which can accelerate performance up to 30x

    Microsoft releases SQL Server 2014 CTP 2 and touts its new In-Memory capabilities, which can accelerate performance up to 30x. Microsoft took advantage of its PASS Summit 2013 conference dedicated to SQL Server to unveil CTP 2 of its modern data management platform, SQL Server 2014. SQL Server 2014 is designed around three major goals: offering an "In-Memory" database system, new cloud capabilities to simplify the adoption of cloud computing for SQL databases...

  • 2-server setup for redundancy, backup

    - by minal
    I presently have 1 dedicated virtual server running my website/blog/mail, etc. This is on Hyper-V with 512MB RAM, Windows Web 2008. Within the VM I have these running: SmarterMail (for email), MS DNS (I have my own nameservers on this server), SQL Express, IIS7, and 2 IP addresses. I have now leased 2 physical servers: P4 2.6GHz, 1GB RAM, 80GB HDD. With these new servers I get 2 IPs per server as well. These are running Windows 2008 Standard. With the VM, the HDD was obviously on a RAID setup, so I was not worried about hardware issues as it fell on the provider to manage. However, with the new servers the HDD is not RAID'd, hence my concern that if it fails I need a backup position. What would be the most ideal setup to go for? I am thinking:

    Server 1 (Web/Primary DNS):
    - DNS – NS1
    - SQL Express – OFF; turn on when required, i.e. when Server 2 is down
    - SmarterMail – OFF; turn on when required, i.e. when Server 2 is down
    - IIS 7

    Server 2 (SQL/Backup):
    - DNS – NS2
    - SQL Web Edition
    - SmarterMail
    - IIS 7

    How can I set it up so that if one goes down I can have everything on the other, instantly or by manually switching over? I am confused, as other DNS servers will cache the web server's IP address for requests, and if that server goes down the backup server will have a different IP. How do I make this work? I will be doing routine backups, in which case I will keep copies of the backups on both servers. If I am copying the same stuff to both servers like a mirror, then I am losing the true performance out of them; it's like one server is always on standby. Ideally I want SQL and web on 2 different machines for best performance. If Server 1 goes down, I should be able to switch to Server 2 fairly easily. I don't have a problem with manual intervention to start the SQL/mail services, etc. In terms of scalability, the VM has coped pretty well to date. Moving forward, the SQL and IIS workload is going to double pretty quickly. Some ideas would be great.
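
    The cached-IP problem is usually mitigated with a low TTL on the A records, so a manual repoint propagates quickly. For keeping Server 2 ready to take over the databases, one minimal warm-standby sketch in T-SQL (all paths and names are placeholders): restore the full backup once on Server 2 WITH NORECOVERY, then copy over and apply periodic log backups:

        -- On Server 1 (periodic, via a scheduled task or Agent job):
        BACKUP DATABASE MyDb TO DISK = N'\\server2\backups\MyDb_full.bak' WITH INIT;
        BACKUP LOG MyDb TO DISK = N'\\server2\backups\MyDb_log.trn' WITH INIT;

        -- On Server 2 (D:\backups is the share's local path):
        -- keep the copy in a restoring state so further logs can be applied:
        RESTORE DATABASE MyDb FROM DISK = N'D:\backups\MyDb_full.bak' WITH NORECOVERY;
        RESTORE LOG MyDb FROM DISK = N'D:\backups\MyDb_log.trn' WITH NORECOVERY;

        -- At failover, bring the standby online (no more logs can be applied afterwards):
        RESTORE DATABASE MyDb WITH RECOVERY;

    This fits the stated preference for manual intervention: the standby stays cold for clients but only a RESTORE WITH RECOVERY and a DNS repoint away.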

  • SSMS Tools Pack 2.1.0 is out. Added support for SQL Server 2012 RC0.

    - by Mladen Prajdic
    This version adds support for SQL Server 2012 RC0 and fixes a few bugs with SQL History. Because of the support for regions in SSMS 2012, the regions and debug sections feature has been removed from SSMS Tools Pack for SQL Server 2012. The feature is still available for previous SSMS versions. In other news, SSMS Tools Pack has won the SQL Magazine bronze award for best free tool of 2011. You can view all the details at the SQL Server Magazine Award page. Thanx to all the people who voted for it. I'm glad you all like it and use it with great success. Also, I've added the possibility to subscribe to email notifications in case the auto-updater doesn't work for you for some reason, like being behind a proxy. Enjoy it!

  • SQLS Timeouts - High Reads in Profiler

    - by lb01
    I've audited a SQLS2008 server with Profiler for one day... the overhead didn't seem to trouble this new client my company has. They are using a legacy VB6 application as a front-end. They're experiencing timeouts once SQLS RAM usage is high. The server is currently running x64 SQLS2008 on a VM with nearly 9 GB of RAM. SQL Server's 'max server memory' option is currently set to 6GB. I've put the results of the trace in a table and queried them using this query:

        SELECT TextData, ApplicationName, Reads
        FROM [TraceWednesday]
        WHERE TextData IS NOT NULL AND EventClass = 12
        GROUP BY TextData, ApplicationName, Reads
        ORDER BY Reads DESC

    As I expected, some values are very high. Top reads, in pages:

        2504188
        1965910
        1445636
        1252433
        1239108
        1210153
        1088580
        1072725

    Am I correct in thinking that the top one (2504188 pages) is 20033504 KB, which is roughly ~20,000 MB, 20GB? These queries are often executed and can take quite some time to run. Eventually RAM is used up because of the cache fattening, and timeouts occur once SQL cannot 'splash' pages in the buffer pool as much. Costs go up. Am I correct in my understanding? I've read that I should tune the associated T-SQL and create appropriate indices. Obviously cutting down the I/O would make SQL Server use less RAM. Or maybe it might just slow down the process of chewing up the whole RAM; if a lot fewer pages are read, maybe it'll all run much better even when usage is high (less time swapping, etc.). Currently, our only option is to restart SQL once a week when RAM usage is high; suddenly the timeouts disappear. SQL breathes again. I'm sure lots of DBAs have been in this situation. I'm asking, before I start digging out all of the bad T-SQL and put indices here and there: is there something else I can do? Any advice beyond what I know (not much yet...) would be much appreciated. Leo.
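
    Before rewriting T-SQL, the read-heavy statements can also be pulled from the plan cache without running Profiler again. A sketch using the standard DMVs (note the stats reset on instance restart):

        SELECT TOP 10
            qs.total_logical_reads,
            qs.execution_count,
            qs.total_logical_reads / qs.execution_count AS avg_reads,
            SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                      ((CASE qs.statement_end_offset
                            WHEN -1 THEN DATALENGTH(st.text)
                            ELSE qs.statement_end_offset
                        END - qs.statement_start_offset) / 2) + 1) AS statement_text
        FROM sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
        ORDER BY qs.total_logical_reads DESC;

    The statements at the top are the first candidates for indexing, and their plans (via sys.dm_exec_query_plan) usually show the scans responsible for the page reads.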

  • NLog to output db.Log

    - by Coppermill
    I would like to use NLog to output my LINQ to SQL generated SQL to the log file. E.g. db.Log = Console.Out reports the generated SQL to the console (http://www.bryanavery.co.uk/post/2009/03/06/Viewing-the-SQL-that-is-generated-from-LINQ-to-SQL.aspx). How can I get the log to go to NLog instead?

  • Datacenter network change control best practices

    - by jpolache
    I have been tasked with compiling a list of possible network equipment changes at a data center. The task includes tagging which changes need change control and which don't. Does anyone know of a "best practices" list that I can start from? The methods for doing change control at this data center are well established. The list would be of specific configuration items that should or should not be included in the change control process, for example:
    - static route entries
    - switch port assignments
    - firewall rule additions/changes
    - etc.

  • Catalyst Control Center removal

    - by Allan
    I've recently tested a graphics card on a machine that required different drivers than my onboard integrated graphics, so I had Catalyst Control Center installed. After I pulled out the graphics card and went back to using the onboard graphics, I removed Catalyst Control Center through Control Panel - Programs, which was fine. But apparently it didn't really remove it. It doesn't show up under Programs anymore, but when I reboot Windows I get an error saying "The Catalyst Control Center is not supported by the driver version of your enabled graphics adapter", and it's still in the taskbar... How do I permanently remove it?

  • Remote screen control software (such as VNC) with a permission mechanism

    - by xuhdev
    Does anyone know of software that allows controlling another computer, where the server can configure permissions? For example, the server could define five users: one of them can control, but the other four can only view. Also, file transfer support is required. I've tried TightVNC; it works fine otherwise, but it just does not support user permission control. RAdmin can do this job, but I wish to find software that is free. Thank you!

  • Excel: Find a specific cell and paste the value from a control cell into it

    - by G-Edinburgh
    I have two columns, one containing the room number, e.g. B-CL102, the other containing a varying integer. I want to enter a different, manually determined, integer in a third column. Whether by macro or native Excel, is there a way to use two control cells at the top of the sheet, typing the room number into one and the matching integer for that room into the other? I have minimal experience with macros, essentially just the basics. I tried to use a VLOOKUP formula to look at the two control cells (range) and then fill in the new column, but I don't know how to fix that value in place so that it doesn't change when I change the values in the control cells.

  • Real tortoises keep it slow and steady. How about the backups?

    - by Maria Zakourdaev
    … Four tortoises were playing in the backyard when they decided they needed hibiscus flower snacks. They pooled their money and sent the smallest tortoise out to fetch the snacks. Two days passed and there was no sign of the tortoise. "You know, she is taking a lot of time", said one of the tortoises. A little voice from just outside the fence said, "If you are going to talk that way about me, I won't go."

    Is it too much to ask of a quite expensive 3rd-party backup tool to be way faster than the SQL Server native backup? Or at least to save a respectable amount of storage by producing a really smaller backup file? By saying "really smaller", I mean at least getting a file half the size. After Googling the internet in an attempt to understand what other "SQL people" are using for database backups, I see that most people are using one of three tools, which are the main players in the SQL backup area:

    - LiteSpeed by Quest
    - SQL Backup by Red Gate
    - SQL Safe by Idera

    The feedback about those tools is truly emotional and happy. However, while reading the forums and blogs I have wondered: is it possible that many are simply accustomed to using the above tools since SQL 2000 and 2005? This can easily be understood, due to the fact that a 300GB database backup, for instance, using a regular SQL 2005 backup statement would have run for about 3 hours and produced a ~150GB file (depending on the content, of course). Then you take a 3rd-party tool which performs the same backup in 30 minutes, resulting in a 30GB file and leaving you speechless; you run to management, persuading them to buy it because it is definitely worth the price. In addition to the increased speed and disk space savings, you would also get backup file encryption and virtual restore, features that are still missing from SQL Server. But in case you, like me, don't need these additional features and only want a tool that performs a full backup MUCH faster AND produces a far smaller backup file (like the gain you observed back in the SQL 2005 days), you will be quite disappointed. The SQL Server backup compression feature has totally changed the market picture.

    Medium-size database. Take a look at the table below; check out how my SQL Server 2008 R2 compares to the other tools when backing up a 300GB database. It appears that, when talking about backup speed, SQL 2008 R2 compresses and performs the backup in similar overall times as all three other tools. The 3rd-party tools' maximum compression level takes twice as long. The backup file gain is not that impressive, except at the highest compression levels, but the price you pay is very high CPU load and much longer time. Only SQL Safe by Idera was quite fast at its maximum compression level, but for most of the run time it used 95% CPU on the server. Note that I have used two types of destination storage, SATA 11 disks and FC 53 disks; obviously, on the faster storage I got my backup ready in half the time. Looking at the above results, should we spend money, and bother with another layer of complexity and a software middle-man, for medium-sized databases? I'm definitely not going to do so.

    Very large database. As the next phase of this benchmark, I moved to a 6-terabyte database, which was actually my main backup target. Note how using multiple files enables the SQL Server backup operation to use parallel I/O and remarkably increases its speed, especially when the backup device is heavily striped. SQL Server supports a maximum of 64 backup devices for a single backup operation, but the most speed is gained when using one file per CPU; in the case above, 8 files for a 2-quad-CPU server. The impact of additional files is minimal. However, SQLsafe doesn't show any speed improvement between 4 files and 8 files. Of course, with such huge databases every half percent of compression turns into noticeable numbers. Saving almost 470GB of space may turn the backup tool into quite a valuable purchase. Still, the backup speed and high CPU are variables that should be taken into consideration. As for us, the backup speed is more critical than the storage, and we cannot allow a production server to sustain 95% CPU for such a long time. Bottom line, 3rd-party backup tool developers, we are waiting for some breakthrough release. There are a few unanswered questions, like the restore speed comparison between the different tools and the impact of multiple backup files on the restore operation. Stay tuned for the next benchmarks.

    Benchmark server: SQL Server 2008 R2 SP1, 2 quad CPUs. Database location: NetApp FC 15K aggregate, 53 discs.

    Backup statements: No matter how good a tool's UI is, we need to run the backup tasks from inside SQL Server Agent to make sure they are covered by our monitoring systems. I have used extended stored procedures (command-line execution is also an option; I haven't noticed any impact on backup performance).

    Native SQL backup:

        backup database <DBNAME> to disk= '\\<networkpath>\par1.bak', disk= '\\<networkpath>\par2.bak', disk= '\\<networkpath>\par3.bak' with format, compression

    LiteSpeed:

        EXECUTE master.dbo.xp_backup_database @database = N'<DBName>', @backupname = N'<DBName> full backup', @desc = N'Test', @compressionlevel = 8, @filename = N'\\<networkpath>\par1.bak', @filename = N'\\<networkpath>\par2.bak', @filename = N'\\<networkpath>\par3.bak', @init = 1

    SQL Backup (Red Gate):

        EXECUTE master.dbo.sqlbackup '-SQL "BACKUP DATABASE <DBNAME> TO DISK= ''\\<networkpath>\par1.sqb'', DISK= ''\\<networkpath>\par2.sqb'', DISK= ''\\<networkpath>\par3.sqb'' WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 4, INIT"'

    SQL Safe (Idera):

        EXECUTE master.dbo.xp_ss_backup @database = 'UCMSDB', @filename = '\\<networkpath>\par1.bak', @backuptype = 'Full', @compressionlevel = 4, @backupfile = '\\<networkpath>\par2.bak', @backupfile = '\\<networkpath>\par3.bak'

    If you still insist on using 3rd-party tools for the backups in your production environment at maximum compression level, you will definitely need to consider limiting CPU usage, which will increase the backup operation time even more:

    - Red Gate: use the THREADPRIORITY option (values 0–6)
    - LiteSpeed: use @throttle (a percentage, like 70%)
    - SQL Safe: the only thing I have found was the @Threads option

    Yours, Maria
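
    P.S. If you want to pull the durations and compressed sizes from your own servers instead of trusting vendor numbers, msdb already records them for native backups (and 3rd-party tools usually log there too). A minimal query, assuming SQL 2008+ so that compressed_backup_size is present:

        SELECT TOP 20
            bs.database_name,
            bs.backup_start_date,
            DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS duration_s,
            bs.backup_size / 1048576.0            AS size_mb,
            bs.compressed_backup_size / 1048576.0 AS compressed_mb
        FROM msdb.dbo.backupset AS bs
        WHERE bs.type = 'D'  -- full database backups only
        ORDER BY bs.backup_start_date DESC;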
