Search Results

Search found 79588 results on 3184 pages for 'sql data storage'.


  • Linq to Sql, Repositories, and Asp.Net MVC ViewData: How to remove redundancy?

    - by Dr. Zim
    Linq to SQL creates objects that are IQueryable and full of relations, while the HTML helpers require specific interface objects such as IEnumerable<SelectListItem>. What I think could happen: reuse the objects from Linq to SQL without all the baggage, i.e., return POCOs from the Linq to SQL objects without additional Domain Model classes? Extract objects that easily convert to (or are) HTML helper objects like the SelectListItem enumeration? Is there any way to do this without breaking separation of concerns? Some neat OOP trick to bridge the needs? For example, if this were within a repository, the SelectListItem wouldn't belong there. The select new is a nice way to cut an object out of the Linq to SQL result without the baggage, but it still references a class that shouldn't be referenced:

        IEnumerable<SelectListItem> result =
            (from record in db.table
             select new SelectListItem
             {
                 Selected = record.selected,
                 Text = record.Text,
                 Value = record.Value
             }).AsEnumerable();
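
    One common answer, sketched below with illustrative names (OptionItem and SelectListMapping are not from the question): have the repository project straight into a plain POCO, and keep the SelectListItem mapping in a small web-layer extension, so System.Web.Mvc never leaks into the data layer.

        // Plain POCO owned by the domain layer -- no MVC reference.
        public class OptionItem
        {
            public bool Selected { get; set; }
            public string Text { get; set; }
            public string Value { get; set; }
        }

        // Repository method: the same "select new" trick, but into the POCO.
        public IEnumerable<OptionItem> GetOptions()
        {
            return (from record in db.table
                    select new OptionItem
                    {
                        Selected = record.selected,
                        Text = record.Text,
                        Value = record.Value
                    }).AsEnumerable();
        }

        // Web layer: the only code that knows about SelectListItem.
        public static class SelectListMapping
        {
            public static IEnumerable<SelectListItem> ToSelectListItems(
                this IEnumerable<OptionItem> items)
            {
                return items.Select(i => new SelectListItem
                {
                    Selected = i.Selected,
                    Text = i.Text,
                    Value = i.Value
                });
            }
        }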

    Read the article

  • SSIS 2008 - How to read from SQL Server Compact Edition file?

    - by Gustavo Cavalcanti
    I can see "SQL Server Compact Destination" under Data Flow Destinations, but I am looking for its source counterpart. If I choose an ADO.NET source and create a new connection, there's no provider for SQL CE. What am I missing? Thanks! Update: I am able to create a Data Source (under the "Data Sources" folder in my SSIS project) that connects to an existing SQL CE file. But how can I use this Data Source in my data flow?

    Read the article

  • How to deploy SQL Reporting 2005 when Data Sources are locked?

    - by spoulson
    The DBAs here maintain all SQL Server and SQL Reporting servers. I have a custom-developed SQL Reporting 2005 project in Visual Studio that runs fine on my local SQL Database and Reporting instances. I need to deploy to a production server, so I had a folder created on a SQL Reporting 2005 server with permissions to upload files. Normally, a deploy from within Visual Studio is all that is needed to upload the report files. However, for security purposes, data sources are maintained explicitly by the DBAs and stored in a separate, locked-down common folder on the reporting server. I had them create the data source for me. When I attempt to deploy from VS, it gives me the error "The item '/Data Sources' already exists." I get this whether I'm deploying the whole project or just a single report file. I already set OverwriteDataSources=false in the project properties. The TargetServer URL and folder are verified correct. I suppose I could copy the files manually, but I'd like to be able to deploy from within VS. What could I be doing wrong?

    Read the article

  • Dump Linq-To-Sql now that Entity Framework 4.0 has been released?

    - by DanM
    The relative simplicity of Linq-To-Sql, as well as all the criticism leveled at version 1 of Entity Framework (especially the vote of no confidence), convinced me to go with Linq-To-Sql "for the time being". Now that EF 4.0 is out, I wonder if it's time to start migrating over to it. Questions: What are the pros and cons of EF 4.0 relative to Linq-To-Sql? Is EF 4.0 finally ready for prime time? Is now the time to switch over?

    Read the article

  • Why is jQuery's .data() function better at preventing memory leaks?

    - by burak ozdogan
    Hi, Regarding the jQuery utility function jQuery.data(), the online documentation says: "The jQuery.data() method allows us to attach data of any type to DOM elements in a way that is safe from circular references and therefore from memory leaks." Why can using document.body.foo = 52; result in a memory leak, and under what conditions, such that I should use jQuery.data(document.body, 'foo', 52); instead? Should I ALWAYS prefer .data() to expandos, in every case? (I would appreciate an example comparing the two.) Thanks, burak ozdogan
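
    For context, a hedged sketch of the difference (plain JavaScript; the element id is illustrative). The expando pattern creates a DOM-to-closure-to-DOM cycle that old IE garbage collectors could not break; .data() avoids it by keeping values in jQuery's internal cache:

        // Expando: el -> handler -> el, a circular reference through the DOM.
        var el = document.getElementById('box');
        el.handler = function () { console.log(el.id); };

        // .data(): the value lives in jQuery's cache, keyed by an id stamped
        // on the element, so the element never references the JS object.
        jQuery.data(el, 'foo', 52);
        console.log(jQuery.data(el, 'foo')); // 52
        jQuery.removeData(el, 'foo');        // removing the element via jQuery also purges it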

    Read the article

  • File copying utility like rsync with error handling like ddrescue, for data recovery from a hard drive with bad sectors or hardware failure

    - by purefusion
    I have a hard drive with either bad blocks or sectors that are failing to read due to potential mechanical issues, such as a bad disk head, bad motor, or some other issue that is causing the hard drive to read data excruciatingly slowly and with lots of read errors. I'm seeing an average of 50 KB/sec, with some reads dropping below 10 KB/sec, and frequently it gets stuck on a file or sector altogether, usually for quite a long time, from 2-10 minutes or more (when using rsync, before it times out).

    Speed seems to vary wildly, it gets stuck on files a lot, and when it finally gets "unstuck" it only seems to last for a short burst before it gets stuck again. The drive is also very quiet, with only an occasional sound of files copying (usually when it gets stuck/unstuck for a brief time, before getting stuck again). Thus, there are none of those evil sounds normally associated with HDD death. Someone suggested the problems sounded like they might be caused by a misaligned disk head, which requires a lot of re-reads before it finally reads data with success. Sounds plausible, but I digress...

    Anyway, the problem with rsync is that it has no decent error-handling support. Obviously, it wasn't meant for use in recovering data from failing hard drives, but all the so-called "data recovery" utilities out there that are meant for such use usually focus on recovery of deleted files or messed-up partitions, rather than copying files off dying hard drives. Deleted-file recovery is not what I need, so perhaps you can understand my disappointment in not having found what I'm after yet.

    Naturally, this is where you'd probably say "You should use ddrescue!" Well, that's all fine and dandy, but I've already got most of the data backed up, so I just want to recover certain files. I'm not concerned with trying to recover a full partition block-by-block as ddrescue does. I am only interested in rescuing specific files and directories.

    Ideally, what I'd like is some sort of cross between rsync and ddrescue: something that lets me specify source and destination as directories of normal files like rsync (rather than two full partitions as ddrescue requires), with a way to skip files with errors in an initial run, and that then allows me to attempt recovery of those files in a later run (with a slightly altered command, of course), perhaps even offering an option to specify the number of retry attempts - just like how ddrescue works with blocks, only I want a utility that works with specific files/directories like rsync does.

    So am I daydreaming here, or does something exist that can do this? Or maybe even a way to make rsync or ddrescue work this way? I'm really open to whatever solutions might work, so long as they let me choose which files I want to "rescue", can skip files with errors in the initial run, and can try/retry those errors again later.

    So far I've tried rsync with the following options, but it often gets stuck on a file for longer than the timeout, and ideally I'd just like it to move on to the next file and come back later to the files it gets stuck on. I don't think that's possible, though. Anyway, here's what I've been using up till now:

        rsync -avP --stats --block-size=512 --timeout=600 /path/to/source/* /path/to/destination/
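
    A rough sketch of one two-pass approach (assumes GNU ddrescue is installed, and that parsing rsync's error log this way is good enough for your file names; everything here is an assumption, not a tested recipe). ddrescue happily operates on single files, not just whole devices, and its map file lets repeated runs resume and re-poke only the bad areas:

        #!/bin/sh
        # Pass 1: rsync grabs everything readable; errors go to a log.
        SRC=/path/to/source
        DST=/path/to/destination
        rsync -aP --timeout=120 "$SRC/" "$DST/" 2>rsync-errors.log

        # Pass 2: pull the quoted filenames out of rsync's error lines and
        # retry each one with ddrescue (-r3 = three retry passes on bad areas).
        grep -o '"[^"]*"' rsync-errors.log | tr -d '"' | while read -r path; do
            rel=${path#"$SRC"/}                      # path relative to the source
            mkdir -p "$DST/$(dirname "$rel")"
            ddrescue -r3 "$SRC/$rel" "$DST/$rel" "$DST/$rel.map"
        done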

    Read the article

  • What is the best way to configure a SQL Server for 50 developers?

    - by Lakhlani Prashant
    Hi, If I am running an organization that has 50 .NET developers, all using SQL Server, what is the best way to make a single SQL Server available to them? Here are some of the concerns I want to be careful about: Should I configure database users per project, per user, or both? Should I provide a single SQL Server instance? There are some more concerns, but I think getting answers to these two will be a good starting point.
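
    Not an answer to the policy question itself, but a minimal T-SQL sketch of the per-user, per-project-database pattern on a single shared instance (domain, login, and database names are all hypothetical):

        -- Individual login per developer, so activity is auditable per person.
        CREATE LOGIN [DOMAIN\jsmith] FROM WINDOWS;

        -- Grant access per project database through built-in roles
        -- (sp_addrolemember is the SQL Server 2005/2008-era syntax).
        USE ProjectA;
        CREATE USER [DOMAIN\jsmith] FOR LOGIN [DOMAIN\jsmith];
        EXEC sp_addrolemember 'db_datareader', 'DOMAIN\jsmith';
        EXEC sp_addrolemember 'db_datawriter', 'DOMAIN\jsmith';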

    Read the article

  • How can I sum the number of rows of a table in Oracle SQL?

    - by Christian Magro
    Dear community, I have the following statement in Oracle SQL:

        SELECT COUNT(FE_DAY)
        FROM TI_DATE
        WHERE (FE_MO = 02 AND FE_YEAR IN ('2011', '2012'))
        GROUP BY FE_DAY

    This query effectively returns a list of counts, one row per FE_DAY; there are 10 rows. But I don't want to show those rows; I only need the number of rows itself (10). I also tried the following:

        SELECT SUM(COUNT(FE_DAY))
        FROM TI_DATE
        WHERE (FE_MO = 02 AND FE_YEAR IN ('2011', '2012'))
        GROUP BY FE_DAY

    But the answer was 57, the total across all groups, which is different from the number of rows that I need.
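
    A hedged sketch of two ways to get just the row count of the grouped result:

        -- Count the rows of the grouped result in a subquery:
        SELECT COUNT(*) AS num_days
        FROM (SELECT FE_DAY
                FROM TI_DATE
               WHERE FE_MO = 02 AND FE_YEAR IN ('2011', '2012')
               GROUP BY FE_DAY);

        -- Equivalently, count the distinct days directly:
        SELECT COUNT(DISTINCT FE_DAY) AS num_days
          FROM TI_DATE
         WHERE FE_MO = 02 AND FE_YEAR IN ('2011', '2012');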

    Read the article

  • EXPORT AS INSERT STATEMENTS: But in SQL*Plus the lines exceed 2500 characters!

    - by The chicken in the kitchen
    Hello, I have to export an Oracle table as INSERT statements, but the INSERT statements so generated exceed 2500 characters. Since I am obliged to execute them in SQL*Plus, I receive an error message. This is my Oracle table:

        CREATE TABLE SAMPLE_TABLE (
            C01 VARCHAR2 (5 BYTE) NOT NULL,
            C02 NUMBER (10) NOT NULL,
            C03 NUMBER (5) NOT NULL,
            C04 NUMBER (5) NOT NULL,
            C05 VARCHAR2 (20 BYTE) NOT NULL,
            c06 VARCHAR2 (200 BYTE) NOT NULL,
            c07 VARCHAR2 (200 BYTE) NOT NULL,
            c08 NUMBER (5) NOT NULL,
            c09 NUMBER (10) NOT NULL,
            c10 VARCHAR2 (80 BYTE),
            c11 VARCHAR2 (200 BYTE),
            c12 VARCHAR2 (200 BYTE),
            c13 VARCHAR2 (4000 BYTE),
            c14 VARCHAR2 (1 BYTE) DEFAULT 'N' NOT NULL,
            c15 CHAR (1 BYTE),
            c16 CHAR (1 BYTE)
        );

    ASSUMPTIONS: a) I am OBLIGED to export table data as INSERT statements; I am allowed to use UPDATE statements in order to avoid the SQL*Plus error "SP2-0027: input is too long (> 2499 characters)"; b) I am OBLIGED to use SQL*Plus to execute the script so generated; c) please assume that every record can contain special characters: CHR(10), CHR(13), and so on; d) I CAN'T use SQL*Loader; e) I CAN'T export and then import the table: I can only add the "delta" using INSERT/UPDATE statements through SQL*Plus.
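
    A sketch of the script shape the export could generate (values abbreviated; C01/C02 are assumed here to identify a row, which the question does not state): insert each row with only the first chunk of the long c13 value, then append the remaining chunks in UPDATE statements, so no single input line approaches the SQL*Plus limit:

        INSERT INTO SAMPLE_TABLE (C01, C02, C03, C04, C05, c06, c07, c08, c09, c13)
        VALUES ('AAAAA', 1, 1, 1, 'X', 'Y', 'Z', 1, 1, '...first 1000 chars of c13...');

        UPDATE SAMPLE_TABLE
           SET c13 = c13 || '...next 1000 chars...'
         WHERE C01 = 'AAAAA' AND C02 = 1;

        -- Special characters survive if emitted as concatenations, e.g.
        --   ... || CHR(10) || ...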

    Read the article

  • How to check if an internal storage file has any data

    - by user3720291
        public class Save extends Activity {
            int levels = 2;
            int data_block = 1024;
            //char[] data = new char[] {'0', '0'};
            String blankval = "0";
            String targetval = "0";
            String temp;
            String tempwrite;
            String string = "null";
            TextView tex1;
            TextView tex2;

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.save);
                Intent intent = getIntent();
                Bundle b = intent.getExtras();
                tex1 = (TextView) findViewById(R.id.textView1);
                tex2 = (TextView) findViewById(R.id.textView2);
                if (b != null) {
                    string = (String) b.get("string");
                }
                loadprev();
                save();
            }

            public void save() {
                if (string.equals("Blank")) blankval = "1";
                if (string.equals("Target")) targetval = "1";
                temp = blankval + targetval;
                try {
                    FileOutputStream fos = openFileOutput("data.gds", MODE_PRIVATE);
                    fos.write(temp.getBytes());
                    fos.close();
                } catch (FileNotFoundException e) { e.printStackTrace(); }
                  catch (IOException e) { e.printStackTrace(); }
                tex1.setText(blankval);
                tex2.setText(targetval);
            }

            public void loadprev() {
                String final_data = "";
                try {
                    FileInputStream fis = openFileInput("data.gds");
                    InputStreamReader isr = new InputStreamReader(fis);
                    char[] data = new char[data_block];
                    int size;
                    while ((size = isr.read(data)) > 0) {
                        String read_data = String.copyValueOf(data, 0, size);
                        final_data += read_data;
                        data = new char[data_block];
                    }
                } catch (FileNotFoundException e) { e.printStackTrace(); }
                  catch (IOException e) { e.printStackTrace(); }
                char[] tempread = final_data.toCharArray();
                blankval = "" + tempread[0];
                targetval = "" + tempread[1];
            }
        }

    After much tinkering I have finally managed to get my save/load function to work, but it has an error. It worked until I did a fresh reinstall, which deleted data.gds; afterwards the save/load function crashes because the data.gds file has no previous values. Can I use an if statement to check whether data.gds has any values? If so, how do I do it? And if not, what could I use instead?
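
    Yes - a sketch of one guard (the check itself is new; the method and field names are from the question): test the saved file before reading it, so a fresh install falls back to defaults instead of crashing on the missing file:

        // In onCreate(), replacing the unconditional loadprev() call:
        java.io.File saved = getFileStreamPath("data.gds");
        if (saved.exists() && saved.length() >= 2) {
            loadprev();          // file holds at least the two saved characters
        } else {
            blankval = "0";      // first run after install: keep the defaults
            targetval = "0";
        }
        save();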

    Read the article

  • How to select the nth row in a SQL database table?

    - by Charles Roper
    I'm interested in learning some (ideally) database-agnostic ways of selecting the nth row from a database table. It would also be interesting to see how this can be achieved using the native functionality of the following databases: SQL Server, MySQL, PostgreSQL, SQLite, and Oracle. I am currently doing something like the following in SQL Server 2005, but I'd be interested in seeing others' more agnostic approaches:

        WITH Ordered AS (
            SELECT ROW_NUMBER() OVER (ORDER BY OrderID) AS RowNumber,
                   OrderID, OrderDate
            FROM Orders)
        SELECT *
        FROM Ordered
        WHERE RowNumber = 1000000

    Credit for the above SQL: Firoz Ansari's Weblog. Update: See Troels Arvin's answer regarding the SQL standard. Troels, have you got any links we can cite?
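
    For reference, hedged examples of the common native idioms for n = 1,000,000 (column names as in the question):

        -- MySQL, PostgreSQL, SQLite: LIMIT/OFFSET (the offset is zero-based)
        SELECT OrderID, OrderDate FROM Orders
        ORDER BY OrderID LIMIT 1 OFFSET 999999;

        -- Oracle: filter on ROWNUM over an ordered subquery
        SELECT *
          FROM (SELECT o.*, ROWNUM rn
                  FROM (SELECT OrderID, OrderDate FROM Orders ORDER BY OrderID) o
                 WHERE ROWNUM <= 1000000)
         WHERE rn = 1000000;

        -- SQL:2008 standard form (supported by later versions of several engines)
        SELECT OrderID, OrderDate FROM Orders
        ORDER BY OrderID OFFSET 999999 ROWS FETCH NEXT 1 ROW ONLY;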

    Read the article

  • What sql server isolation level should I choose to prevent concurrent reads?

    - by Brian Bolton
    I have the following transaction:

    1. SQL inserts one new record into a table called tbl_document
    2. SQL deletes all records matching a criterion in another table called tbl_attachment
    3. SQL inserts multiple records into tbl_attachment

    Until this transaction finishes, I don't want other users to be aware of the (1) new records in tbl_document, (2) deleted records in tbl_attachment, and (3) modified records in tbl_attachment. Would Read Committed isolation be the correct isolation level?
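
    A T-SQL sketch (column names and the @doc_id variable are placeholders, not from the question): READ COMMITTED, the default, already hides uncommitted changes from other sessions; the key is that all three statements share one transaction:

        SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
        BEGIN TRANSACTION;
            INSERT INTO tbl_document (title) VALUES ('new doc');
            DELETE FROM tbl_attachment WHERE document_id = @doc_id;
            INSERT INTO tbl_attachment (document_id, name)
                VALUES (@doc_id, 'attachment1');
        COMMIT TRANSACTION;

    Note that under plain READ COMMITTED other readers block on the affected rows until the commit; with the READ_COMMITTED_SNAPSHOT database option enabled they instead see the last committed versions.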

    Read the article

  • Uploading a CSV file to SQL Server - identity problem

    - by Doozer1979
    Given a column structure in a CSV file of:

        First_Name, Last_Name, Date_Of_Birth

    and a SQL Server table with a structure of:

        ID (PK) | First_Name | Last_Name | Date_Of_Birth

    (field ID is an identity with an auto-increment of 1), how do I arrange it so that SQL Server does not attempt to insert the First_Name column from the CSV file into the ID field? For info, the CSV is loaded into a DataTable and then copied to SQL Server using SqlBulkCopy. Should I modify the CSV file before the import to add the ID column? (The destination table is truncated prior to import, so there's no need to worry about duplicate key values.) Or perhaps add an ID column to the DataTable? Or is there a setting in SQL Server that I may have missed?
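
    The usual fix needs neither change: a sketch using SqlBulkCopy.ColumnMappings (the destination table name is hypothetical), mapping only the three CSV columns so the unmapped identity ID is generated by SQL Server:

        using System.Data.SqlClient;

        using (var bulkCopy = new SqlBulkCopy(connectionString))
        {
            bulkCopy.DestinationTableName = "dbo.People";
            // Map source column -> destination column; ID is never mapped.
            bulkCopy.ColumnMappings.Add("First_Name", "First_Name");
            bulkCopy.ColumnMappings.Add("Last_Name", "Last_Name");
            bulkCopy.ColumnMappings.Add("Date_Of_Birth", "Date_Of_Birth");
            bulkCopy.WriteToServer(dataTable); // the DataTable loaded from the CSV
        }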

    Read the article

  • How to make Emacs sql-mode recognize MySQL #-style comments?

    - by Ken
    I'm reading a bunch of MySQL files that use # (to end-of-line) comments, but my sql-mode doesn't support them. I found the syntax-table part of sql.el that defines /* */ and -- comments, but according to this, Emacs syntax tables support only two comment styles. Is there a way to add support for # comments in sql.el easily?
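
    One approach, sketched below: sql.el already defines -- as a style-"b" line comment ended by newline, so # can be registered as another style-"b" comment starter in the same table:

        ;; A sketch: treat `#' as a newline-terminated comment in sql-mode.
        (add-hook 'sql-mode-hook
                  (lambda ()
                    (modify-syntax-entry ?# "< b" sql-mode-syntax-table)))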

    Read the article

  • Best Ruby ORM for Wrapping around Legacy SQL Server Database?

    - by Technocrat
    Hi. I found this answer and it sounds like almost exactly what I'm doing. I have heard mixed answers about whether or not DataMapper can support SQL Server through DataObjects. Basically, we have an app that uses a consistently structured database, with consistently named tables, etc., in SQL Server. We're making all kinds of tools and stuff that have to interact with it, some of them remotely, so I decided that we need to create some common, simple access point for read/write operations on the SQL Server app, since its API is all C# and other things I despise. Now my question is whether anyone has any examples or projects they know of where a Ruby ORM can essentially create models for another application's legacy database by defining the conventions of each model's pkeys, fkeys, table names, etc. Sequel is the only ORM I've used with SQL Server, but never to do anything quite like this. Any suggestions?
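
    A hedged Sequel sketch (assumes the sequel and tiny_tds gems; every name here is illustrative, not from the question) showing how a model can override the table name, primary key, and association keys to match a legacy convention:

        require 'sequel'

        DB = Sequel.connect(adapter: 'tinytds', host: 'dbserver',
                            database: 'LegacyApp', user: 'app', password: 'secret')

        class Customer < Sequel::Model(:tblCustomer)  # legacy table name
          set_primary_key :CustomerID                 # legacy key convention
          one_to_many :orders, class: :Order, key: :CustomerID
        end

        class Order < Sequel::Model(:tblOrder)
          set_primary_key :OrderID
        end

        Customer.each { |c| puts "#{c[:CustomerID]}: #{c.orders.size} orders" }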

    Read the article

  • Using mcrypt to pass data across a webservice is failing

    - by adam
    Hi, I'm writing an error handler script which encrypts the error data (file, line, error, message, etc.) and passes the serialized array as a POST variable (using curl) to a script which then logs the error in a central db. I've tested my encrypt/decrypt functions in a single file and the data is encrypted and decrypted fine:

        define('KEY', 'abc');
        define('CYPHER', 'blowfish');
        define('MODE', 'cfb');

        function encrypt($data) {
            $td = mcrypt_module_open(CYPHER, '', MODE, '');
            $iv = mcrypt_create_iv(mcrypt_enc_get_iv_size($td), MCRYPT_RAND);
            mcrypt_generic_init($td, KEY, $iv);
            $crypttext = mcrypt_generic($td, $data);
            mcrypt_generic_deinit($td);
            return $iv.$crypttext;
        }

        function decrypt($data) {
            $td = mcrypt_module_open(CYPHER, '', MODE, '');
            $ivsize = mcrypt_enc_get_iv_size($td);
            $iv = substr($data, 0, $ivsize);
            $data = substr($data, $ivsize);
            if ($iv) {
                mcrypt_generic_init($td, KEY, $iv);
                $data = mdecrypt_generic($td, $data);
            }
            return $data;
        }

        echo "<pre>";
        $data = md5('');
        echo "Data: $data\n";
        $e = encrypt($data);
        echo "Encrypted: $e\n";
        $d = decrypt($e);
        echo "Decrypted: $d\n";

    Output:

        Data: d41d8cd98f00b204e9800998ecf8427e
        Encrypted: ê÷#¯KžViiÖŠŒÆÜ,ÑFÕUW£´Œt?†÷>c×åóéè+„N
        Decrypted: d41d8cd98f00b204e9800998ecf8427e

    The problem is, when I put the encrypt function in my transmit file (tx.php) and the decrypt in my receive file (rx.php), the data is not fully decrypted (both files have the same set of constants for key, cypher and mode).

    Data before passing:

        a:4:{s:3:"err";i:1024;s:3:"msg";s:4:"Oops";s:4:"file";s:46:"/Applications/MAMP/htdocs/projects/txrx/tx.php";s:4:"line";i:80;}

    Data decrypted:

        Mª4:{s:3:"err";i:1024@7OYªç`^;g";s:4:"Oops";s:4:"file";sôÔ8F•Ópplications/MAMP/htdocs/projects/txrx/tx.php";s:4:"line";i:80;}

    Note the random characters in the middle. My curl is fairly simple:

        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, 'data=' . $data);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $output = curl_exec($ch);

    Things I suspect could be causing this: encoding of the curl request; something to do with mcrypt padding missing bytes; or I've been staring at it too long and have missed something really, really obvious. If I turn off the crypt functions (so the transfer tx-rx is unencrypted) the data is received fine. Any and all help much appreciated! Thanks, Adam
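
    The symptoms match the first suspicion: the ciphertext is raw binary, so bytes that look like '&', '=' or '+' get mangled in the POST body. A hedged sketch of the usual fix, encoding before transport and decoding on arrival:

        // tx.php: base64 makes the ciphertext 7-bit safe; urlencode protects
        // the remaining '+', '/' and '=' characters in the POST field.
        $payload = 'data=' . urlencode(base64_encode(encrypt($data)));
        curl_setopt($ch, CURLOPT_POSTFIELDS, $payload);

        // rx.php: PHP has already urldecoded $_POST, so just reverse base64.
        $data = decrypt(base64_decode($_POST['data']));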

    Read the article

  • What can map database tables like LINQ to SQL did?

    - by trnTash
    A good thing about LINQ to SQL was that it offered a fast and reliable way to map database tables into classes accessible from a C# project. However, it is no longer recommended to create projects using LINQ to SQL. What is its substitute? What kind of tool should I use in VS 2010 today if I want the same functionality I had with LINQ to SQL?

    Read the article

  • .Net 4.0 Memory-Mapped Files versus RDBMS Storage

    - by Harry
    I'm interested in people's thoughts comparing storing data in a traditional SQL-based database or utilising a memory-mapped file such as the one in the new .NET 4.0 runtime. The data in question would be arrays of simple structures. Obvious pros and cons:

    SQL database pros:
    - Ad-hoc query support
    - SQL management tools
    - Schema changes (adding more columns and setting default values)

    Memory-mapped file pros:
    - Lighter overhead? (this is an assumption on my part)
    - Shareable between process threads

    Any others? Is it worth it for performance gains?
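
    For concreteness, a minimal sketch of the .NET 4.0 API in question (file name, map name, and struct are illustrative): an array slot of simple structs written and read back through a view accessor, sharable across processes by map name:

        using System;
        using System.IO;
        using System.IO.MemoryMappedFiles;
        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential)]
        struct Sample { public int Id; public double Value; }

        class Demo
        {
            static void Main()
            {
                long capacity = 1000 * Marshal.SizeOf(typeof(Sample));
                using (var mmf = MemoryMappedFile.CreateFromFile(
                           "samples.dat", FileMode.OpenOrCreate, "SampleMap", capacity))
                using (var accessor = mmf.CreateViewAccessor())
                {
                    var s = new Sample { Id = 1, Value = 42.0 };
                    accessor.Write(0, ref s);       // write struct at byte offset 0
                    Sample back;
                    accessor.Read(0, out back);     // read it straight back
                    Console.WriteLine(back.Value);  // 42
                }
            }
        }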

    Read the article
