Search Results

Search found 79588 results on 3184 pages for 'sql data storage'.


  • MSSQL Timing out, a couple of questions...

    - by user29000
    Hi, about once a week my MSSQL server times out, or rather the machine runs out of RAM. This morning it reached 3.9 GB of the available 4 GB, with MSSQL taking up 2.5 GB. I'm concerned that I've not configured SQL to release memory as it should, so I ran sp_who2 while the timeouts were occurring to see what processes were running. If I could post the CSV data file I would; however, there were 85 processes in total, mostly related to the Full Text service: FT Gatherer - about 35 of these running under the 'sa' account against the master database with a status of either sleeping or background, many dependent on other processes. Is that normal? MySite database - there were only 5 processes for the one active site/database and all were either sleeping or suspended, but their lastBatch dates were set to 1/12/2020. Is that normal? The database is only about 20 MB in size and the traffic levels are very low, so I'm thinking of limiting the amount of RAM SQL has access to (from unlimited to maybe 2 GB). Any thoughts / advice would be appreciated. Many thanks, Ben
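    A cap on SQL Server's memory can be set with sp_configure; a minimal sketch, assuming the 2 GB limit mentioned above (the value is in MB):

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        -- limit the buffer pool to roughly 2 GB
        EXEC sp_configure 'max server memory (MB)', 2048;
        RECONFIGURE;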

    Read the article

  • Is the set of data always normalized in one form or the other in Databases

    - by manugupt1
    Suppose I have a set of data. Given the data and the relation schemas, can I assume that the data is normalized in one form or another? In my opinion, raw data has to be normalized into some form, but a discussion with a friend has led me to ask this question here. To expand on the question: given a set of functional dependencies for a relation or table, is it guaranteed that the table would at least be in 1NF, if not the higher normal forms?

    Read the article

  • C# - Sharing static data between multiple processes

    - by Murtaza Mandvi
    I have a WCF service (instantiated within a console application over NetTCP). This service has a large volume of static data which gets instantiated on load. I have multiple instances of this console application running at once, and all of them perform the same static data initialization. Is there a way I can have a single data source and share the data among the processes, so that each process does not have to consume a large amount of memory?
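    One option for this is a shared memory-mapped file; a minimal sketch using System.IO.MemoryMappedFiles, where the map name "MyStaticData" and the 1 MB size are placeholders chosen purely for illustration:

        using System;
        using System.IO.MemoryMappedFiles;

        class SharedDataDemo
        {
            static void Main()
            {
                // Publisher process: create (or open) a named shared-memory region and write into it.
                using (var mmf = MemoryMappedFile.CreateOrOpen("MyStaticData", 1024 * 1024))
                using (var accessor = mmf.CreateViewAccessor())
                {
                    accessor.Write(0, 12345L); // write a sample value at offset 0
                    Console.ReadLine();        // keep the mapping alive while readers attach
                }
            }
        }

        // In the other console processes, open the same region by name instead:
        //   using (var mmf = MemoryMappedFile.OpenExisting("MyStaticData"))
        //   using (var accessor = mmf.CreateViewAccessor())
        //   {
        //       long value = accessor.ReadInt64(0);
        //   }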

    Read the article

  • Write Java objects to file

    - by Mark Szymanski
    Is it possible to write Java objects to a binary file? The objects I want to write would be two arrays of String objects. The reason I want to do this is to save persistent data. If there is some easier way to do this, let me know. Thanks in advance!
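    Java's built-in serialization handles this; a minimal sketch writing and reading two String arrays with ObjectOutputStream (the file name is just an example):

        import java.io.*;

        public class SaveArrays {
            public static void main(String[] args) throws IOException, ClassNotFoundException {
                String[] first = {"a", "b"};
                String[] second = {"c", "d"};

                // Write both arrays to a binary file.
                try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("data.bin"))) {
                    out.writeObject(first);
                    out.writeObject(second);
                }

                // Read them back in the same order they were written.
                try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("data.bin"))) {
                    String[] a = (String[]) in.readObject();
                    String[] b = (String[]) in.readObject();
                }
            }
        }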

    Read the article

  • Managing persistent data on an Amazon EC2 web server

    - by Derek
    I've just started trying out Amazon's EC2 service for running an ASP.NET web app which uses a SQL Server 2005 Express database. I have some questions about how to configure and operate it best for reliability, and I'm hoping to tap into some collective wisdom here as this is my first foray into EC2. Here's how I have it configured currently: OS: Windows 2003; SQL Server Express 2005; web content stored on an EBS volume (E drive); database data stored on an EBS volume (E drive); database backups to the C drive and then copied off to S3; Elastic IP address attached to the production instance. When I make a change to the OS configuration, I make a new AMI using the bundle feature. Unfortunately, I found that this results in significant downtime while the bundle is created and the new instance is started. It seems that when I'm ready to make a new AMI, I should: start up a new temporary instance; detach the EBS volume from the production instance; detach the IP address from the production instance; attach the IP address to the temporary instance; attach the EBS volume to the temporary instance; create an AMI from the production instance; and after the production instance restarts, reverse the attach/detach steps to put it back in production. Is this the right order of events to prevent any chance of corrupting the EBS volume? Will the EBS volume become corrupt if I detach it while a database write is taking place? Should I snapshot the EBS volume of the production instance and attach it to the temporary instance instead? Or could taking a snapshot of the EBS volume while it's in use cause corruption? Any suggestions to improve reliability and operations?
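    For reference, the snapshot/detach steps look roughly like this with today's AWS CLI (the EC2 API tools of that era used different command names); the volume and instance IDs below are placeholders:

        # Snapshot the volume first, so a copy exists even if the detach goes wrong
        aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "pre-maintenance backup"

        # Move the volume from the production instance to the temporary one
        aws ec2 detach-volume --volume-id vol-0123456789abcdef0
        aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
            --instance-id i-0temporaryinstance --device xvdf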

    Read the article

  • Erase personal data from corporate laptop

    - by microspino
    I need to delete my data from the company laptop. Nothing special, just 2 or 3 folders (I have Dropbox installed on this PC), and I'd like to be sure they are gone. I read about free tools and bootable CDs to erase the entire disk; I don't need those, just a free tool to put some zeros where my data was before.
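    One approach that needs no extra tools: delete the folders, then wipe the volume's free space with Windows' built-in cipher command so the freed sectors are overwritten. A sketch, assuming the folders lived under C:\Users\me (the path only tells cipher which volume and directory to wipe):

        rem Delete the folders first
        rmdir /s /q "C:\Users\me\Dropbox\private-folder"

        rem Overwrite deallocated space on that volume (cipher writes zeros, then ones, then random data)
        cipher /w:C:\Users\me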

    Read the article

  • Can this way of storing typed objects be improved?

    - by Pindatjuh
    This is an "can it be improved"-question. Topic: Storing typed objects in memory. Background information: I'm building a compiler for the x86-32 Windows platform for my language. My goal includes typed objects. Idea: Every primitive is a semi-class (it can be used as if it was a normal class, but it's stored more compact). Every class is represented by primitives and some meta-data (containing class-properties, inheritance stuff, etc.). The meta-data is complex: it doesn't use fields but instead context-switches. For primitives, the meta-data is very small, compared to a "real" class, which is alot bigger. This enables another idea that "primitives are objects", in my language, which I found nessecairy. Example: If I have an array of 32 booleans, then the pure content of this array is exactly 4 byte (32 bits of booleans). The meta-data will contain flags that the type is an array of booleans, which contains 32 entries. The meta-data is very compacted, on bit-level: using a sort of "packing" mechanism, which is read by a FSM at runtime, when doing inspection of the type (like when passing the object to methods for checking, etc.) For instance (read from left to right, top to bottom, remember vertical position when going to the right, and check nearest column header for meaning of switch): Primitive? Array? Type-Meta 1 Byte? || Size (1 byte) 1 1 [...] 1 [...] done 0 2 Bytes? || Size (2 bytes) 1 [...] done || Size (4 bytes) 0 [...] done Integer? 1 Byte? 2 Bytes? 0 1 0 1 done 1 done 0 done Boolean? Byte? 0 1 0 done 1 done More-Primitives 0 .... Class-Stuff (Huge) 0 ... (After reaching done the data is inserted. || = byte alignment. [...] is variable sized. ... is not described here, for simplicity. And let's call them cost-based-data-structures.) For an array of 32 booleans containing all true values, the memory for this type would be (read top-down): 1 Primitive 1 Array 1 ArrayType: Primitive 0 Not-Array 0 Not-Integer 1 Boolean 0 Not-Byte (thus bit) 1 Integer Size: 1 Byte 00100000 Array size 01010101 01010101 01010101 01010101 Data (user defined) Thus, 8 bytes represent 32 booleans in an array: 11100101 00100000 01010101 01010101 01010101 01010101 How can I improve this? (Both performance- and memory-consumption wise)

    Read the article

  • Help with data retrieval MACRO

    - by Andrei Ciobanu
    Hello, given the following structure:
        struct nmslist_elem_s {
            nmptr data;
            struct nmslist_elem_s *next;
        };
        typedef struct nmslist_elem_s nmslist_elem;
    where:
        typedef void* nmptr;
    Is it possible to write a macro that retrieves the data from the element and casts it to the right type, i.e. MACRO(type, element) that expands to *((type*)element->data)? For example, for int I would need something like this: *((int*)(element->data)).
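    A minimal sketch of such a macro (the name NM_DATA is just a placeholder); the extra parentheses keep the expansion safe when element is itself an expression:

        #define NM_DATA(type, element) (*((type *)((element)->data)))

        /* usage, assuming elem->data points at an int */
        int value = NM_DATA(int, elem);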

    Read the article

  • Are Dynamic Prepared Statements Bad? (with php + mysqli)

    - by John
    I like the flexibility of dynamic SQL and I like the security and improved performance of prepared statements. So what I really want is dynamic prepared statements, which is troublesome to build because bind_param and bind_result accept a "fixed" number of arguments. So I made use of an eval() statement to get around this problem, but I get the feeling this is a bad idea. Here's example code of what I mean:
        // array of WHERE conditions
        $param = array('customer_id'=>1, 'qty'=>'2');

        $stmt = $mysqli->stmt_init();
        $types = '';
        $bindParam = array();
        $where = '';
        $count = 0;

        // build the dynamic sql and param bind conditions
        foreach($param as $key=>$val) {
            $types .= 'i';
            $bindParam[] = '$p'.$count.'=$param["'.$key.'"]';
            $where .= "$key = ? AND ";
            $count++;
        }

        // prepare the query -- SELECT * FROM t1 WHERE customer_id = ? AND qty = ?
        $sql = "SELECT * FROM t1 WHERE ".substr($where, 0, strlen($where)-4);
        $stmt->prepare($sql);

        // assemble the bind_param command
        $command = '$stmt->bind_param($types, '.implode(', ', $bindParam).');';

        // evaluate the command -- $stmt->bind_param($types,$p0=$param["customer_id"],$p1=$param["qty"]);
        eval($command);
    Is that last eval() statement a bad idea? I tried to avoid code injection by encapsulating values behind the variable name $param. Does anyone have an opinion or other suggestions? Are there issues I need to be aware of?
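    For what it's worth, the eval() can usually be avoided: on PHP 5.6+ the values can be spread directly into bind_param, and on older versions call_user_func_array with references works. A rough sketch along those lines, reusing $sql, $types and $param from the question above:

        $values = array_values($param);
        $stmt = $mysqli->prepare($sql);

        // PHP 5.6+: unpack the values straight into bind_param
        $stmt->bind_param($types, ...$values);

        // Older PHP: bind_param needs references, so build them explicitly
        // $refs = array($types);
        // foreach ($values as $k => $v) { $refs[] = &$values[$k]; }
        // call_user_func_array(array($stmt, 'bind_param'), $refs);

        $stmt->execute();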

    Read the article

  • Converting ntext to nvarchar(max) - Getting around size limitation

    - by Overflew
    Hi all, I'm trying to change an existing SQL ntext column to nvarchar(max) and I'm encountering an error on the size limit. There's a large amount of existing data, some of which is over the 8k limit, I believe. We're looking to convert this so that the field is searchable in LINQ. The two SQL statements I've tried are:
        update Table set dataNVarChar = convert(nvarchar(max), dataNtext) where dataNtext is not null
        update Table set dataNVarChar = cast(dataNtext as nvarchar(max)) where dataNtext is not null
    And the error I get is: Cannot create a row of size 8086 which is greater than the allowable maximum row size of 8060. This is using SQL Server 2008. Any help appreciated, thanks. Update / Solution: the marked answer below is correct, and SQL 2008 can change the column to the correct data type in my situation, with no dramas for the LINQ-utilising application we use on top of it:
        alter table [TBL] alter column [COL] nvarchar(max)
    I've also been advised to follow it up with:
        update [TBL] set [COL] = [COL]
    which completes the conversion by moving the data from the LOB structure into the table (if the length is less than 8k), which improves performance and keeps things proper.

    Read the article

  • TransactionScope and Transactions

    - by Mike
    In my C# code I am using TransactionScope because I was told not to rely on my SQL programmers always using transactions, and we are responsible, and yada yada. Having said that, it looks like the TransactionScope object rolls back before the SqlTransaction? Is that possible, and if so, what is the correct methodology for wrapping a TransactionScope in a transaction? Here is the SQL test:
        CREATE PROC ThrowError
        AS
        BEGIN TRANSACTION --SqlTransaction
        SELECT 1/0
        IF @@ERROR <> 0
        BEGIN
            ROLLBACK TRANSACTION --SqlTransaction
            RETURN -1
        END
        ELSE
        BEGIN
            COMMIT TRANSACTION --SqlTransaction
            RETURN 0
        END
        go

        DECLARE @RESULT INT
        EXEC @RESULT = ThrowError
        SELECT @RESULT
    If I run this I get just the divide by 0 and a return of -1. Calling from the C# code I get an extra error message: Divide by zero error encountered. Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK TRANSACTION statement is missing. Previous count = 1, current count = 0. If I give the SQL transaction a name, then: Cannot roll back SqlTransaction. No transaction or savepoint of that name was found. Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK TRANSACTION statement is missing. Previous count = 1, current count = 2. Sometimes it seems the count goes up until the app completely exits. The C# is just:
        using (TransactionScope scope = new TransactionScope())
        {
            // ... execute SQL
            scope.Complete();
        }

    Read the article

  • Use one Socket to send and receive data

    - by volody
    What makes more sense: using one socket to send and receive data to/from an embedded hardware device, or using one socket to send data and a separate socket to read data? Communication is not very intensive, but the important point is to receive data as fast as possible. On the application side, Windows XP and up is used.

    Read the article

  • Which are the RDBMS that minimize the server roundtrips? Which RDBMS are better (in this area) than

    - by user193655
    When latency is high ("when pinging the server takes time"), the server roundtrips make the difference. Now I don't want to focus on the roundtrips created in programming, but on the roundtrips that occur "under the hood" in the DB engine, i.e. the roundtrips that are 100% dependent on how the RDBMS itself is written. I have been told that Firebird has more roundtrips than MySQL, but this is the only information I have. I am currently supporting MS SQL but I'd like to change RDBMS (because I use Express Editions and in my scenario they are quite limiting from the performance point of view), so to make a wise choice I would like to include this point in "my RDBMS comparison feature matrix" to understand which is the best RDBMS to choose as an alternative to MS SQL. So the bold sentence above would make me prefer MySQL to Firebird (for the roundtrip concept, not in general), but can anyone add more information? And where does MS SQL sit? Is someone able to "rank" the roundtrip performance of the main RDBMSs, or at least MS SQL, MySQL, PostgreSQL and Firebird? (I am not interested in Oracle since it is not free, and if I have to change I would change to a free RDBMS.) Anyway, MySQL (as mentioned several times on Stack Overflow) has an unclear future and a license that is not 100% free, so my final choice will probably fall on PostgreSQL or Firebird. Additional info: you could answer my question with a simple list like MSSQL: 3; MySQL: 1; Firebird: 2; PostgreSQL: 2 (where 1 is good, 2 average, 3 bad). Of course, if you can post some links where the roundtrips per RDBMS are compared, that would be great.

    Read the article

  • Data warehousing in SQL Server 2008

    - by 3bd
    Hi all, I am new to data warehousing and I am a little confused. Please provide some simple steps to create a cube, fill it, and query it. What I have so far: a database with the original data; I have designed the star schema and made the appropriate tables; I have created an Analysis Services project in VS 2008 and then made the data source, data source view, dimensions and the cube, all based on the star schema I created previously. Now, what should I do to fill this cube and run queries against it?
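    Filling the cube means deploying and processing it (e.g. right-click the cube in the Analysis Services project and choose Process, or run a Process Full from Management Studio); once processed it can be queried with MDX. A minimal sketch, with the cube, measure and dimension names below purely illustrative:

        SELECT
            [Measures].[Sales Amount]           ON COLUMNS,
            [Dim Date].[Calendar Year].MEMBERS  ON ROWS
        FROM [MyStarSchemaCube]
        WHERE ([Dim Product].[Category].[Bikes])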

    Read the article

  • Perfectly reproducible select statement default ordering issue....

    - by Dave
    Hi, I've recently been chasing an issue with a client's DB... a solution was found, but it has been impossible to recreate. Essentially, we're doing a Select * from mytable where ArbitraryColumn = 75, where MyTable has an identity column called MyIdentityColumn, incremented by one on each insert. Naturally, and normally, I would assume that the order returned would be the order in which the rows were inserted (a bad assumption, but one which was forced onto me through an inherited application, which has been patched). Essentially, I would like suggestions as to why the database behaves differently when restored to my local machine (same OS, same SQL Server version - 200 sp3, same collation, and the same backup instance restored on it as the test DB on the client site). When I perform the above select locally, I get the rows in order of insert (i.e. identity column ordered ascending). On the client, the order seems random (but the same 'random' order each time)... A few other points: I have the same collation on my test server as the client; the same DB backup restored to a test DB only I can access; the same SQL Server version and service pack; the same OS; the test DB is a new DB with a new log and MDF... I have the problem 'solved' by adding an explicit ORDER BY clause, but I want to understand the cause of the issue, given that my attempts to recreate it have been futile while it is perfectly recreatable on the client server... Thanks in advance, Dave

    Read the article

  • Smaller Chunks of Data

    - by Googler
    I am using a web service to fetch a large amount of data. When sending the request, I receive the error: ** Please request data in smaller chunks. ** Is this a problem with the web service I am fetching the information from, or a general rule for fetching data over the internet? May I know the cause of this problem, and also how to request the data in smaller chunks to avoid this error?

    Read the article

  • Recover data from a Windows Dynamic Volume Spanned Disk

    - by iCe
    I have a dynamic volume created with two spanned partitions over two disks. Recently, one disk has started failing, and I want to copy the data on that disk to another disk before replacing it. However, I don't know how to select only what is on the failing disk, because the partition spans both disks. Maybe imaging the entire disk would do the job? Or do I have to copy all the data from both disks? Thanks in advance!

    Read the article

  • How bad is opening and closing a SQL connection for several times? What is the exact effect?

    - by Eren
    For example, I need to fill lots of DataTables with SqlDataAdapter's Fill() method:
        DataAdapter1.Fill(DataTable1);
        DataAdapter2.Fill(DataTable2);
        DataAdapter3.Fill(DataTable3);
        DataAdapter4.Fill(DataTable4);
        DataAdapter5.Fill(DataTable5);
        ...
    Even though all the data adapter objects use the same SqlConnection, each Fill method will open and close the connection unless the connection state is already open before the method call. What I want to know is: how does unnecessarily opening and closing SqlConnections affect the performance of the application? How much does it need to scale before the bad effects of this become visible (hundreds of thousands of concurrent users?)? On a mid-size website (50,000 users daily), is it worth bothering to find all the Fill() calls, keep them together in the code, and open the connection before any Fill() call and close it afterwards?
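    For reference, keeping the connection open around a batch of fills looks roughly like this; whether it is worth it is something to measure, since ADO.NET connection pooling already makes open/close fairly cheap:

        // assumes each adapter's SelectCommand was created against this connection
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();  // opened once; Fill() leaves an already-open connection open

            DataAdapter1.Fill(DataTable1);
            DataAdapter2.Fill(DataTable2);
            DataAdapter3.Fill(DataTable3);
        }   // closed (returned to the pool) once, here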

    Read the article

  • Hyper-V Ubuntu Networking Problems Copying Large Amounts of Data

    - by Anonymous
    I am trying to copy a large amount (about 50 GB) of data over my network from a Hyper-V-hosted virtual machine running Ubuntu 11.04 (Natty Narwhal) to another (non-virtual) Ubuntu host that I plan to use for testing upgrades to one of our web applications. The problem I am having is with the virtual machine, which I shall refer to in what follows as "source.host". This machine is running 64-bit Ubuntu Server with the 2.6.38-8-server kernel and the Microsoft Linux Integration Components for Hyper-V kernel modules (hv_utils, hv_timesource, hv_netvsc, hv_blkvsc, hv_storvsc, and hv_vmbus) loaded. It uses a Hyper-V "synthetic network adapter" for its networking interface. To do the copy, I log on to the machine with the data and run the following commands (call the remote machine "destination.host"):
        $ cd /path/to/data
        $ tar -cvf - datafolder/ | ssh [email protected] "cat > ~/data.tar"
    This runs for a while and then suddenly stops after transferring somewhere between 2 and 6 GB. The terminal on the source.host machine displays a Write failed: broken pipe error. The odd part is this: after this occurs, the "source.host" machine is no longer able to talk to the rest of the network. I cannot ping any other hosts on the network from the "source.host" machine, and I cannot ping the "source.host" machine from any other host on the network. I am equally unable to access any of the web services hosted on "source.host". Running ifconfig on "source.host" shows the network adapter to be up and running as usual, with the correct IP address and everything. I tried restarting the networking service with
        $ /etc/init.d/networking restart
    but the problem does not go away. Restarting the machine makes it capable of talking to the network again -- it can ping and be pinged by other hosts, and the web services are also accessible and usable as normal -- but attempting the copy operation again results in the same failure, requiring another restart. As an experiment, I tried replacing the tar | ssh pipeline above with a straight scp:
        $ scp -r datafolder/ [email protected]:~
    but to no avail. Thinking that the issue might have to do with the kernel packet-send buffers filling up, I tried increasing the buffer size to 12 MB (up from the 128 KB default) with
        # echo 12582911 > /proc/sys/net/core/wmem_max
    but this also had no effect. I'm guessing at this point that it might be a problem with the Microsoft synthetic network driver, but I don't really know. Does anyone have any suggestions? Thank you very much in advance!

    Read the article

  • How to work with DataGridView when it needs to show many rows of data (approx. 1 million)

    - by ruprog
    I have a problem displaying data in a DataGridView: a large amount of data (stock quotes) needs to be displayed from left to right. Tell me what to do to display an array of data like this in a DataGridView:
        Public dat As New List(Of act)

        Public Class act
            Public time As Date
            Public price As Integer
        End Class

        Sub work()
            Dim r As New Random
            For x As Integer = 0 To 1000000
                Dim el As New act
                el.time = Now
                el.price = r.Next(0, 1000)
                dat.Add(el)
            Next
        End Sub
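    With a million rows, adding them to the grid directly will be very slow; a common approach is the grid's virtual mode, where cell values are supplied on demand. A minimal sketch, where the Form1 and DataGridView1 names are placeholders for the form's own controls:

        ' In the form's Load handler: enable virtual mode and declare row/column counts
        Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load
            DataGridView1.VirtualMode = True
            DataGridView1.ColumnCount = 2
            DataGridView1.RowCount = dat.Count
        End Sub

        ' The grid asks for each cell value only when it needs to draw it
        Private Sub DataGridView1_CellValueNeeded(sender As Object,
                e As DataGridViewCellValueEventArgs) Handles DataGridView1.CellValueNeeded
            Dim item As act = dat(e.RowIndex)
            e.Value = If(e.ColumnIndex = 0, CObj(item.time), CObj(item.price))
        End Sub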

    Read the article

  • Does a Postgresql dump create sequences that start with - or after - the last key?

    - by bennylope
    I recently created a SQL dump of a database behind a Django project, and after cleaning the SQL up a little bit I was able to restore the DB and all of the data. The problem was that the sequences were all mucked up. I tried adding a new user and got the Python error IntegrityError: duplicate key violates unique constraint. Naturally I figured my SQL dump didn't restart the sequence. But it did:
        DROP SEQUENCE "auth_user_id_seq" CASCADE;
        CREATE SEQUENCE "auth_user_id_seq" INCREMENT 1 START 446 MAXVALUE 9223372036854775807 MINVALUE 1 CACHE 1;
        ALTER TABLE "auth_user_id_seq" OWNER TO "db_user";
    I figured out that a repeated attempt at creating a user (or any new row in any table with existing data and such a sequence) allowed for successful object/row creation. That solved the pressing problem. But given that the last user ID in that table was 446 - the same start value in the sequence creation above - it looks like PostgreSQL was simply trying to start creating rows with that key. Does the SQL dump set the start key off by one? Or should I invoke some other command to start sequences after the given start ID? Keenly curious.
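    For anyone hitting the same thing, a sequence can be moved past the existing keys after a restore with setval; a minimal sketch against the table above:

        -- point the sequence at the current maximum id, so nextval() returns max+1
        SELECT setval('auth_user_id_seq', (SELECT MAX(id) FROM auth_user));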

    Read the article

  • Laptop Backup Synch to the Data Center Without VPN

    - by Sameer
    We would like to synchronize our users' laptops or back them up to the data center - looking for suggestions/alternatives to sync them to the data center in a way the users don't have to know about. Blue-sky like-to-haves:
      • No VPN, but it needs to be secure
      • Admin can access all files
      • Global dedup
      • Select file types only - MS Office, PSTs, PDFs
      • Incremental change only
      • Right now 60 users but needs to scale (all Windows 7 64-bit)
      • Can allocate budget if we have to
    Don't mean to be vague, but hoping to get some proven places to start.

    Read the article

  • Load balancers, multiple data centers and url based routing

    - by kunkunur
    There is one data center, dc1. There is a business need to set up another data center, dc2, in another geography, and there might be more in the future, say dc3. Within data center dc1 there are two web servers, say WS1 and WS2. These two web servers do not share anything currently, and there isn't any foreseen need for more web servers within each dc. dc1 also has a local load balancer which has been set up with session stickiness, so if a user u1 lands on dc1 and the load balancer decides to route his first request to WS1, then from there on all of u1's requests get routed to WS1. The local load balancer and web servers are invisible to the user. The local load balancer listens for traffic on a virtual IP which is assigned to the virtual cluster of web servers WS1 and WS2; the virtual IP is the IP to which the host name resolves in DNS. There are no client-specific subdomains as of now; instead there is a client-specific URL (context), e.g. www.example.com/client1 and www.example.com/client2. Given the above, when dc2 is onboarded I want to route the traffic between dc1 and dc2 based on the client. The options that I have found so far are: (1) have client-specific subdomains, e.g. client1.example.com and client2.example.com, and assign each of them the virtual IP of the data center to which I want to route them; or (2) assign www.example.com and www1.example.com to the first dc, i.e. dc1, and assign www2.example.com to dc2; all requests first get routed to dc1, where WS1 and WS2 redirect the user to www1.example.com or www2.example.com based on whether the URL ends with /client1 or /client2. I need help with the following: if I set up a global load balancer between dc1 and dc2, do I have any alternative solutions? That is, can a global load balancer route traffic based on the URL? Are there drawbacks to the subdomain-based solution compared to the www1 solution? With the www1 solution I am worried that it creates a dependency on dc1, at least for the first request, and the user will see that he is getting redirected to a different URL.
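    On the "route by URL" question: many layer-7 balancers and reverse proxies can do this. As an illustration only (nginx syntax, with the upstream addresses standing in for each data center's virtual IP):

        upstream dc1 { server 203.0.113.10; }   # dc1 virtual IP (placeholder)
        upstream dc2 { server 198.51.100.10; }  # dc2 virtual IP (placeholder)

        server {
            listen 80;
            server_name www.example.com;

            location /client1/ { proxy_pass http://dc1; }   # client1 stays in dc1
            location /client2/ { proxy_pass http://dc2; }   # client2 goes to dc2
        }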

    Read the article

  • basic JSON pulling data help..

    - by Webby
    New to JSON data and struggling. I guess the answer is really easy, but it's been bugging me for the last hour. Sample data:
        {
            "data": {
                "userid": "17",
                "dates": {
                    "timestame": "1275528578"
                },
                "username": "harino54"
            }
        }
    OK, I can pull userid or username easily enough with echo "$t->userid" or echo "$t->username", but how do I pull data from the brackets within? In this case, timestame? Can't seem to figure it out... any ideas?
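    For what it's worth, after json_decode the nested objects are just chained property accesses; a minimal sketch, assuming the sample above is in $json and that $t in the question corresponds to the decoded data object:

        <?php
        $obj = json_decode($json);      // objects, not associative arrays
        $t = $obj->data;

        echo $t->userid;                // "17"
        echo $t->dates->timestame;      // "1275528578" - the nested value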

    Read the article
