Search Results

Search found 65999 results on 2640 pages for 'large data volumes'.


  • How to save and retrieve data as key-value pairs or files in isolated storage?

    - by kaleidoscope
    One can use isolated storage to store data locally on the user's computer. There are two ways to use isolated storage. The first way is to save or retrieve data as key/value pairs by using the IsolatedStorageSettings class. The second way is to save or retrieve files by using the IsolatedStorageFile class. More details can be found at http://silverlight.net/learn/quickstarts/isolatedstorage/   Rituraj, J

    Read the article

  • How to install software that "Requires installation of untrusted packages"?

    - by user135682
    I am using Ubuntu 12.10. When I try to update the software, it shows me this error:

    Requires installation of untrusted packages
    This requires installing packages from unauthenticated sources.

    Details: kalzium-data kanagram kate-data kde-l10n-engb kde-l10n-zhcn kde-runtime-data kdegames-data klettres-data libakonadi-kabc4 liblxc0 libwildmidi-config libwildmidi1 lxc marble-data nepomuk-core-data parley-data tomboy

    Read the article

  • Recovering data from a failed RAID configuration with 4 drives and two RAID sets (Asus P6T / Intel ICH10R)

    - by user56365
    I've added the complete, detailed version of my question below for those who can help, but I want to quickly summarize it first. I set up two RAID arrays using (4) WD Raptors: a striped set for the OS and a 1+0 set for crucial data. On one boot out of fifty, a cable fell out, and afterwards the drive wasn't recognized in the array anymore. While I was trying to fix it, another drive did the same. I now have two drives remaining, luckily with the parity information. I know the striped set is gone, but I need the data on the other set. Can anyone recommend anything to recover the data, or to fix whatever on the two drives prevents the RAID controller from recognizing them, even though the utility screen still lists them as part of the configuration but reports them as not found?

    More details: I recently upgraded to an ASUS P6T motherboard with an Intel ICH10R RAID controller and changed my previous 4-drive RAID array from strictly a RAID 1+0 set to a RAID 0 set for the OS/page/scratch drive and a RAID 1+0 set for crucial data. I never had problems with this configuration after upgrading, even when a drive died and was replaced; I managed to rebuild the array fine. Unfortunately, this time a cable came unattached, and I booted the system as far as the RAID status screen with the degraded error. That shouldn't have been a problem, but after I reattached the drive, it was no longer recognized as a member of the array. Both drives actually show up as non-member disks. I've spent a very, very long time online trying to find information or support and haven't had much luck. After I spent time scanning the drive for errors, damaged partition info, etc., another drive in the set decided it didn't want to be recognized as part of the array either. At this point, I have two of the four drives still functioning, but the RAID 1+0 array went from degraded to failed, and I must find a way to retrieve that data. I think the two drives still in the array hold the parity information, because they show up as OS (110 GB), BACKUP (80 GB) and OS:1 (110 GB), BACKUP (80 GB) under Windows disk management. The other two are simply 74 GB raw unallocated. Is it possible to recover the data using those two drives only, and which tool would I use? Could it be a simple partition-table problem or some other error that is repairable with the hard drive utilities out there? I know the RAID 0 set is done for, but I would assume that because the right drives failed in a 1+0 configuration to preserve the data, I can retrieve it somehow.

    Read the article

  • When does Information become Data? (i.e. Information wants to be free) [closed]

    - by James P. Wright
    I often hear programmers talk about how Information Wants To Be Free, which I mostly agree with, but what people don't often pay attention to is that information and data are not the same thing. Should data also be free? Does that mean all of you should have full access to my Social Security Number and other personal "information"? Where is the limit? And if there is a limit, why do people throw this phrase around as if it fits every circumstance (like this one)?

    Read the article

  • Google, the global data warehouse: knowledge sharing, or ownership of the global market?

    Google, the global data warehouse: knowledge sharing, or ownership of the global market? Google has announced that it is adding new data, named World Bank, to its Public Data Explorer tool. Through this tool, you can find global information on agriculture, electricity consumption per capita, CO2 emissions per capita, the number of Internet users, ... You thus get, through various charts, statistics on all the capitals, letting you compare the different pieces of information specific to ...

    Read the article

  • growing EBS RAID volume

    - by Ryan Fernandes
    I've created a RAID 0 configuration with two 1 GB EBS volumes, assembled at /dev/md0 using mdadm and formatted with XFS. Next, I copied some files over to fill the volume to around 30% of its 2 GB capacity. I then created snapshots of the volumes using ec2-consistent-snapshot and created volumes from those snapshots, but specified the volume size as 2 GB (effectively doubling the capacity of each disk). I then spun up a new instance, assembled the RAID 0 configuration on /dev/md0 from the 2 volumes mentioned above, and mounted it at /vol. df -hT showed /vol as 2 GB (as expected). Now I ran sudo xfs_growfs -d /vol. The command completed normally but reported blocks changed from 523776 to 524160 (only!), and df -hT still showed /vol as 2 GB (instead of the expected 4 GB). I rebooted, remounted and reassembled the RAID, but it still reports the old size. EDIT: trying to grow the RAID using mdadm --grow yields: mdadm: raid0 array /dev/md0 cannot be reshaped. Is there any other way I can grow a RAID 0 array?

    Read the article

  • Facebook releases Presto, its open source query engine for big data, said to be ten times faster than Hadoop's

    Facebook releases Presto, its open source query engine for big data, said to be ten times faster than Hadoop's. Many companies, like Facebook, depend on big data. In this field, the Hadoop/Hive pair is one of the references; as a reminder, Hive is the popular query engine for Hadoop. However, it may be that MapReduce, the essential component on which Hive rests, is not optimized for situations where the quantity of data exceeds a certain...

    Read the article

  • The Bouches-du-Rhône sponsor developers to promote open data: have you submitted your application?

    The Bouches-du-Rhône launch a sponsorship operation for developers of applications and websites, in order to promote open data and the region, with Developpez.com as a partner. Since April, the Bouches-du-Rhône Tourisme association has joined the open data movement by opening its portal and releasing around a hundred tourism-related datasets for the department. All the information held by the association has been made available and usable by developers, scientists, associations, students and companies; in short, by everyone. There you can find, for example...

    Read the article

  • Windows Server 2003 - Are ODBC Data Sources set per-user?

    - by Jakobud
    When I'm logged into our Windows Server 2003 server, I don't see any ODBC Data Sources, but when a different user logs in (who doesn't have administrative rights), they see a big list of ODBC Data Sources. Are ODBC Data Sources set on a per-user basis? How come the Administrator can't see a user's ODBC Data Sources?

    Read the article

  • How to show a large JPEG image on the right in jqGrid's edit form?

    - by cLee
    Is it possible to show a large (i.e. bigger than a thumbnail) JPEG image in the right-hand side of jqGrid's edit form? Users want to look at a photo while entering data into fields ... they are describing things in the photo. I'm sure all things are possible with jQuery, but I don't know where to begin. Thanks.

    html:

    function afterSubmit(r, data, action) {
        // if session timeout returned:
        if (r.responseText == "logout") {
            window.location = '../scripts/logout.php';
        }
        // if an error message is returned:
        if (r.responseText != "") {
            $('#submit_errors').html('Alert:' + r.responseText + '');
            // show div with error message
            $('#submit_errors').slideDown();
            // hide error div after 10 seconds
            window.setTimeout(function() { $('#submit_errors').slideUp(); }, 10000);
            return false; // don't remove this!
        }
        return true; // don't remove this!
    }

    var lastsel;

    jQuery(document).ready(function() {
        var mygrid = jQuery("#mobile_incidents").jqGrid({
            url: 'list.php?q=e',
            editurl: 'edit.php',
            datatype: "json",
            // note: all column names are required even though some columns are hidden
            colNames: ['Rec#', 'Date', 'Line', 'Photo'],
            colModel: [
                { name: 'id', index: 'id', editable: true,
                  editoptions: { readonly: 'readonly' } },
                { name: 'mobile_discoveryDate', index: 'mobile_discoveryDate',
                  sortable: false, editable: true, edittype: 'text',
                  formatter: 'date',
                  formatoptions: { srcformat: 'Y/m/d', newformat: 'm/d/Y' },
                  editoptions: { size: 12, maxlength: 10,
                      dataInit: function(element) {
                          $(element).blur();
                          $(element).datepicker({ dateFormat: 'mm/dd/yyyy' })
                      } } },
                { name: 'mobile_lineName', index: 'mobile_lineName',
                  editable: true, sortable: false },
                { name: 'mobile_photo_name', index: 'mobile_photo_name',
                  editable: false, sortable: false }
            ],
            pager: '#mobile_incidents_pager',
            altRows: false,
            rowNum: 10,
            rowList: [10, 20],
            imgpath: '../include/images/jqgrid',
            viewrecords: true,
            emptyrecords: 'No submissions found!',
            height: 260,
            sortname: 'id',
            sortorder: 'desc',
            gridview: true,
            scrollrows: true,
            autowidth: true,
            rownumbers: false,
            multiselect: false,
            subGrid: false,
            caption: ''
        })
        .navGrid('#mobile_incidents_pager',
            // params:
            { add: false, edit: true, del: false, search: false, view: false,
              refresh: true, alertcap: ' to edit:',
              alerttext: ' . . . click on a row to highlight' },
            // edit params:
            { top: 50, left: 5, editCaption: 'Edit Submission',
              bSubmit: 'Approve/Save', closeAfterEdit: true,
              afterSubmit: function(r, data) { return afterSubmit(r, data, 'edit'); } },
            {}, // add params
            {}, // delete params
            // search params:
            { multipleSearch: false },
            // view params:
            { top: 150, left: 5, caption: 'View Mobile Rail Submission' }
        );
    });

    Read the article

  • Should I be relying on WebTests for data validation?

    - by Alexander Kahoun
    I have a suite of web tests created for a web service. I use it for testing a particular input method that updates a SQL database. The web service doesn't have a way to retrieve the data; that's not its purpose, only to update it. I have a validator that validates the response XML that the web service generates for each request, and all of that works fine. A teammate suggested that I add data validation, so that after the initial response validator runs, I check the database and compare the data with what was in the input request. We have a number of services and libraries, separate from the web service I'm testing, that I can use to get the data and compare it. The problem is that when I run the web test, the data validation always fails, even when the request succeeds. I've tried putting the thread to sleep between the response validation and the data validation, but to no avail; it always gets the data from before the response validation. I can set a breakpoint and visually see that the data has been updated in the DB; the funny thing is that when I step through in debug with the breakpoint, it does validate successfully. Before I get much further into this issue, I have to ask: is this the purpose of web tests? Should I be able to validate data through service calls in this manner, or am I asking too much of a web test, and response validation is as far as I should go?

    Read the article

  • Am I correctly extracting JPEG binary data from this mysqldump?

    - by Glenn
    I have a very old .sql backup of a vBulletin site that I ran around 8 years ago. I am trying to see the file attachments that are stored in the DB. The script below extracts them all, and the output is verified to be JPEG by hex dumping and checking the SOI (start of image) and EOI (end of image) bytes (FFD8 and FFD9, respectively) according to the JPEG wiki page. But when I try to open the files with evince, I get this message: "Error interpreting JPEG image file (JPEG datastream contains no image)". What could be going on here? Some background info:

    - the sqldump is around 8 years old
    - vBulletin 2.x was the software that stored the info
    - most likely PHP 4 was used
    - most likely MySQL 4.0, possibly even 3.x
    - the column datatype these attachments are stored in is mediumtext

    My Python 3.1 script:

    #!/usr/bin/env python3.1
    import re

    trim_l = re.compile(b"""^INSERT INTO attachment VALUES\('\d+', '\d+', '\d+', '(.+)""")
    trim_r = re.compile(b"""(.+)', '\d+', '\d+'\);$""")
    extractor = re.compile(b"""^(.*(?:\.jpe?g|\.gif|\.bmp))', '(.+)$""")

    with open('attachments.sql', 'rb') as fh:
        for line in fh:
            data = trim_l.findall(line)[0]
            data = trim_r.findall(data)[0]
            data = extractor.findall(data)
            if data:
                name, data = data[0]
                try:
                    filename = 'files/%s' % str(name, 'UTF-8')
                    ah = open(filename, 'wb')
                    ah.write(data)
                except UnicodeDecodeError:
                    continue
                finally:
                    ah.close()
    fh.close()

    Update: The JPEG wiki page says FF bytes are section markers, with the next byte indicating the section type. I see some that are not listed in the wiki page (specifically, I see a lot of 5C bytes, so FF5C). But the list is of "common markers", so I'm trying to find a more complete list. Any guidance here would also be appreciated.
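    One possibility worth checking (an assumption on my part, not something confirmed in the thread): the dump was written with backslash escaping, in which case 0x5C is a literal backslash and the odd FF 5C pairs are escape sequences rather than JPEG markers. A minimal unescaping pass over the extracted payload, using MySQL's documented escape set:

    # Hypothetical helper: undo mysql_real_escape_string-style escaping
    # before treating the payload as a JPEG. The escape table is an
    # assumption based on MySQL's documented escape sequences.
    MYSQL_ESCAPES = {
        b'0': b'\x00', b'n': b'\n', b'r': b'\r', b'Z': b'\x1a',
        b"'": b"'", b'"': b'"', b'\\': b'\\',
    }

    def unescape(data: bytes) -> bytes:
        out = bytearray()
        i = 0
        while i < len(data):
            if data[i:i+1] == b'\\' and i + 1 < len(data):
                nxt = data[i+1:i+2]
                out += MYSQL_ESCAPES.get(nxt, nxt)  # unknown escape: keep the byte
                i += 2
            else:
                out += data[i:i+1]
                i += 1
        return bytes(out)

    # usage in the script above: ah.write(unescape(data)) instead of ah.write(data)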

    Read the article

  • How can I 'transpose' my data using SQL and remove duplicates at the same time?

    - by Remnant
    I have the following data structure in my database:

    LastName  FirstName  CourseName
    John      Day        Pricing
    John      Day        Marketing
    John      Day        Finance
    Lisa      Smith      Marketing
    Lisa      Smith      Finance
    etc...

    The data shows employees within a business and which courses they have shown a preference to attend. The number of courses per employee varies (i.e. above, John has 3 courses and Lisa 2). I need to take this data from the database and pass it to a webpage view (ASP.NET MVC). I would like the data that comes out of my database to match the view as closely as possible, and I want to transform the data using SQL so that it looks like the following:

    LastName  FirstName  Course1    Course2    Course3
    John      Day        Pricing    Marketing  Finance
    Lisa      Smith      Marketing  Finance

    Any thoughts on how this may be achieved? Note: one of the reasons I am trying this approach is that the original data structure does not easily lend itself to being iterated over using the typical MVC syntax:

    <% foreach (var item in Model.courseData) { %>

    Because of the duplication of names in the original data, I would end up with lots of conditionals in my view, which I would like to avoid. I have tried transforming the data using C# in my ViewModel but have found it tough going, and I feel I could lighten the workload by leveraging SQL before I return the data. Thanks.
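    If the pivot may happen in application code rather than SQL, collapsing the duplicate names is a small grouping step. A sketch in Python (sample rows taken from the question), just to illustrate the shape of the transform:

    from itertools import groupby

    # rows as they come back from the database, ordered by name
    rows = [
        ("John", "Day", "Pricing"),
        ("John", "Day", "Marketing"),
        ("John", "Day", "Finance"),
        ("Lisa", "Smith", "Marketing"),
        ("Lisa", "Smith", "Finance"),
    ]

    # collapse duplicate names into one row per employee with a course list
    pivoted = [
        (last, first, [course for _, _, course in group])
        for (last, first), group in groupby(rows, key=lambda r: (r[0], r[1]))
    ]
    # [('John', 'Day', ['Pricing', 'Marketing', 'Finance']),
    #  ('Lisa', 'Smith', ['Marketing', 'Finance'])]

    One row per employee with a variable-length course list also removes the conditionals from the view loop.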

    Read the article

  • C#/.Net Error: MySql.Data Object reference not set to an instance of an object.

    - by Simon
    Hello, I get this exception on Windows 7 64-bit, in an application running in VS 2008 Express. I am using Connector/Net 6.2.2.0:

    Message: Object reference not set to an instance of an object.
    Source: MySql.Data
    in MySql.Data.MySqlClient.NativeDriver.GetResult(Int32& affectedRow, Int32& insertedId)

    Stack trace:
    in MySql.Data.MySqlClient.Driver.GetResult(Int32 statementId, Int32& affectedRows, Int32& insertedId)
    in MySql.Data.MySqlClient.Driver.NextResult(Int32 statementId)
    in MySql.Data.MySqlClient.MySqlDataReader.NextResult()
    in MySql.Data.MySqlClient.MySqlDataReader.Close()
    in MySql.Data.MySqlClient.MySqlConnection.Close()
    in MySql.Data.MySqlClient.MySqlConnection.Dispose(Boolean disposing)
    in System.ComponentModel.Component.Finalize()

    No inner exception. This exception is unhandled, and the debugger doesn't point at any code line; it just says "Object reference not set to an instance of an object. MySql.Data". This error is really hard to reproduce. On my 32-bit Windows XP everything is OK. Could it be an error in 64-bit Windows 7? Thank you very much for your answers. Regards, simon

    Read the article

  • PHP DOMDocument class unable to access DOM node

    - by turbod
    Hi. I can't parse this URL: http://foldmunka.net

    $ch = curl_init("http://foldmunka.net");
    //curl_setopt($ch, CURLOPT_NOBODY, true);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    //curl_setopt($ch, CURLOPT_HEADER, true);
    // not necessary unless the file redirects (like the PHP example we're using here)
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    $data = curl_exec($ch);
    $info = curl_getinfo($ch);
    curl_close($ch);
    clearstatcache();

    if ($data === false) {
        echo 'cURL failed';
        exit;
    }

    $dom = new DOMDocument();
    $data = mb_convert_encoding($data, 'HTML-ENTITIES', "utf-8");
    $data = preg_replace('/<\!\-\-\[if(.*)\]>/', '', $data);
    $data = str_replace('<![endif]-->', '', $data);
    $data = str_replace('<!--', '', $data);
    $data = str_replace('-->', '', $data);
    $data = preg_replace('@<script[^>]*?>.*?</script>@si', '', $data);
    $data = preg_replace('@<style[^>]*?>.*?</style>@si', '', $data);
    $data = mb_convert_encoding($data, 'HTML-ENTITIES', "utf-8");
    @$dom->loadHTML($data);

    $els = $dom->getElementsByTagName('*');
    foreach ($els as $el) {
        print $el->nodeName . " | " . $el->getAttribute('content') . "<hr />";
        if ($el->getAttribute('title')) $el->nodeValue = $el->getAttribute('title') . " " . $el->nodeValue;
        if ($el->getAttribute('alt')) $el->nodeValue = $el->getAttribute('alt') . " " . $el->nodeValue;
        print $el->nodeName . " | " . $el->nodeValue . "<hr />";
    }

    I need the alt and title attributes and the plain text, but on this page I cannot access the nodes within the body tag.

    Read the article

  • Can one Oracle schema support a large volume of requests per day? Is this safe?

    - by Hlex
    I'm a Java system designer. We have several large projects to deliver on a tight schedule; these projects are Java APIs without web pages. My design is to create a general flow engine to support all of the projects. The idea uses one Oracle schema holding a general transaction table, plus control and routing tables. They are all nearly complete. But the DBA team is concerned that they will suffer maintaining a very large request volume against one schema. One reason: if there is a problem in some table, they must take the tablespace offline to fix it. This is a problem because all projects would be affected. I am trying to convince them by splitting the data of each table into partitions by project_code and "month number to delete". Example partitions:

    PROJ1_05  PROJ1_06  PROJ1_07
    PROJ2_05  PROJ2_06  PROJ2_07

    All transaction data would be stored in its own partition. So if there is a problem in any part of the tablespace, they can take just some partitions offline, and another project using the same table can still be serviced. Volume should be around 10 million records per day. Is this a good idea? If I must use one schema, what strategy should I follow? Do you have any comments?

    Read the article

  • How can I randomly iterate through a large Range?

    - by void
    I would like to randomly iterate through a range. Each value will be visited only once and all values will eventually be visited. For example:

    class Array
      def shuffle
        ret = dup
        j = length
        i = 0
        while j > 1
          r = i + rand(j)
          ret[i], ret[r] = ret[r], ret[i]
          i += 1
          j -= 1
        end
        ret
      end
    end

    (0..9).to_a.shuffle.each { |x| f(x) }

    where f(x) is some function that operates on each value. A Fisher-Yates shuffle is used to efficiently provide random ordering. My problem is that shuffle needs to operate on an array, which is not cool because I am working with astronomically large numbers. Ruby will quickly consume a large amount of RAM trying to create a monstrous array. Imagine replacing (0..9) with (0..99**99). This is also why the following code will not work:

    tried = {} # store previous attempts
    bigint = 99**99
    bigint.times {
      x = rand(bigint)
      redo if tried[x]
      tried[x] = true
      f(x) # some function
    }

    This code is very naive and quickly runs out of memory as tried obtains more entries. What sort of algorithm can accomplish what I am trying to do?

    [Edit1]: Why do I want to do this? I'm trying to exhaust the search space of a hash algorithm for an N-length input string, looking for partial collisions. Each number I generate is equivalent to a unique input string, entropy and all. Basically, I'm "counting" using a custom alphabet.
    [Edit2]: This means that f(x) in the above examples is a method that generates a hash and compares it to a constant, target hash for partial collisions. I do not need to store the value of x after I call f(x), so memory should remain constant over time.
    [Edit3/4/5/6]: Further clarification/fixes.
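    One constant-memory approach (my own sketch, in Python rather than Ruby) is to walk a full-period linear congruential generator instead of materializing an array: with a power-of-two modulus, a multiplier congruent to 1 mod 4, and an odd increment, the Hull-Dobell theorem guarantees every residue is visited exactly once per period, so values outside the target range can simply be skipped:

    import random

    def shuffled_range(n):
        """Yield every integer in [0, n) exactly once, in scrambled order,
        using O(1) memory. The LCG parameters satisfy Hull-Dobell for a
        power-of-two modulus, so the period covers all m residues."""
        m = 1 << max(2, (n - 1).bit_length())   # power of two >= max(n, 4)
        a = 4 * random.randrange(m // 4) + 1    # a % 4 == 1
        c = 2 * random.randrange(m // 2) + 1    # odd increment
        x = random.randrange(m)
        for _ in range(m):
            x = (a * x + c) % m
            if x < n:                           # skip the padding values
                yield x

    # every value 0..9 appears exactly once; the order varies per run
    print(sorted(shuffled_range(10)) == list(range(10)))  # True

    An LCG's statistical quality is weak, but for exhausting a search space that usually doesn't matter; each value in [0, n) is still produced exactly once, even for n as large as 99**99.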

    Read the article

  • How large a role does subjectiveness play in programming?

    - by Bob
    I often read about the importance of readability and maintainability, or very strong opinions about which syntax features are bad or good, or discussions about the value of certain paradigms, like OOP. Aside from that, this same question floats about in my mind whenever I read debates on SO or Meta about subjective questions, or read questions about best practices and sometimes find myself or others disagreeing. What role does subjectiveness play within the programming realm? Sometimes I think it plays a large role. Software developers are engineers in a way, but also people. A large part of programming is dealing with code that's human-readable. This is very different from math or physics or other disciplines with very exact and structured rules. Here the exact structure and rules are largely up in the air, changeable on a whim, hence the number of languages in existence. And one person may find one language very readable while another person may find their own language the most comfortable. The same goes for practices. One person may not like certain accepted practices; I myself find splitting classes into different files very unreadable, for instance. But I can't say rules haven't helped in general. Certain practices have made and do make life easier, and new languages have given rise to syntax and structure that make life easier. There has certainly been a progression towards code that is easier to read and maintain, even given a largely diverse group of people. So maybe these things aren't as subjective as I thought. It reminds me, in a way, of UI design. Certainly it's subjective, but then there's an entire discipline involved in crafting good UI, and it tends to work. Is there something non-subjective about the ideas behind maintainability, readability, and other best practices? Is there something tangible to grasp when one develops a new language or thinks of new practices?

    Read the article

  • Trying to calculate large numbers in Python with gmpy. Python keeps crashing?

    - by Ryan Peschel
    I was recommended to use gmpy to assist with calculating large numbers efficiently. Before, I was just using plain Python, and my script ran for a day or two and then ran out of memory (not sure how that happened, because my program's memory usage should be basically constant throughout... maybe a memory leak?). Anyway, I keep getting this weird error seconds after starting my program:

    mp_allocate< 545275904->545275904 >
    Fatal Python error: mp_allocate failure
    This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information.

    Also, Python crashes and Windows 7 gives me the generic "python.exe has stopped working" dialog. This wasn't happening with standard Python integers. Now that I've switched to gmpy, I am getting this error just seconds into running my script. I thought gmpy was specialized for dealing with large-number arithmetic? For reference, here is a sample program that produces the error:

    import gmpy2

    p = gmpy2.xmpz(3000000000)
    s = gmpy2.xmpz(2)
    M = s**p
    for x in range(p):
        s = (s * s) % M

    I have 10 GB of RAM, and without gmpy this script ran for days without running out of memory (still not sure how that happened, considering s never really gets larger). Anyone have any ideas? EDIT: Forgot to mention I am using Python 3.2.
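    A back-of-envelope size check (my own estimate, not from the thread): M = 2**3000000000 is a three-billion-bit value, and the unreduced product s * s can be roughly twice that size before the % M reduction, which is the same order of magnitude as the ~545 MB allocation in the error message:

    # rough operand sizes for the sample program above
    p = 3_000_000_000
    print(p / 8 / 2**20)      # ~357.6 MiB for one value near 2**p
    print(2 * p / 8 / 2**20)  # ~715.3 MiB for the square before reduction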

    Read the article

  • jQuery AJAX: How to pass a large HTML fragment as a parameter?

    - by marknt15
    Hello, how can I pass a large chunk of HTML to my PHP script using jQuery AJAX? The result I receive is wrong. Thanks in advance. Cheers, Mark

    jQuery AJAX code:

    $('#saveButton').click(function() {
        // do AJAX and store tree structure to a PHP array (to be saved later in database)
        var treeInnerHTML = $("#demo_1").html();
        alert(treeInnerHTML);
        var ajax_url = 'ajax_process.php';
        var params = 'tree_contents=' + treeInnerHTML;
        $.ajax({
            type: 'POST',
            url: ajax_url,
            data: params,
            success: function(data) {
                $("#show_tree").html(data);
            },
            error: function(req, status, error) {
            }
        });
    });

    treeInnerHTML actual value:

    <ul class="ltr">
      <li id="phtml_1" class="open"><a href="#"><ins>&nbsp;</ins>Root node 1</a>
        <ul>
          <li class="leaf" id="phtml_2"><a href="#"><ins>&nbsp;</ins>Child node 1</a></li>
          <li class="last leaf" id="phtml_3"><a href="#"><ins>&nbsp;</ins>Child node 2</a></li>
        </ul>
      </li>
      <li id="phtml_5" class="file last leaf"><a href="#"><ins>&nbsp;</ins>Root node 2</a></li>
    </ul>

    Returned result from my show_tree div:

    <ul class="\&quot;ltr\&quot;">
      <li id="\&quot;phtml_1\&quot;" class="\&quot;open\&quot;"><a href="%5C%22#%5C%22"><ins></ins></a></li>
    </ul>
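    The escaped quotes and %5C%22 fragments in the result are consistent with the payload never being form-encoded: characters such as <, =, & and quotes are significant in a POST body, so concatenating raw HTML into the params string corrupts it (passing data as an object, e.g. data: {tree_contents: treeInnerHTML}, lets jQuery encode it automatically). For illustration, here is what percent-encoding does to such a payload, sketched in Python:

    # illustration only: form-encoded bodies must percent-encode HTML
    from urllib.parse import urlencode

    html = '<ul class="ltr"><li id="phtml_1">Root node 1</li></ul>'
    print(urlencode({'tree_contents': html}))
    # tree_contents=%3Cul+class%3D%22ltr%22%3E%3Cli+id%3D%22phtml_1%22%3ERoot+node+1%3C%2Fli%3E%3C%2Ful%3E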

    Read the article

  • How can I store a large amount of data from a database to XML (memory problem)?

    - by Andrija
    First, I had a problem with getting the data from the database: it took too much memory and failed. I've set -Xmx1500M and I'm using a scrolling ResultSet, so that was taken care of. Now I need to make an XML file from the data, but I can't put it all in one file. At the moment, I'm doing it like this:

    while (rs.next()) {
        i++;
        xmlStringBuilder.append("\n\t<row>");
        xmlStringBuilder.append("\n\t\t<ID>" + Util.transformToHTML(rs.getInt("id")) + "</ID>");
        xmlStringBuilder.append("\n\t\t<JED_ID>" + Util.transformToHTML(rs.getInt("jed_id")) + "</JED_ID>");
        xmlStringBuilder.append("\n\t\t<IME_PJ>" + Util.transformToHTML(rs.getString("ime_pj")) + "</IME_PJ>");
        // etc.
        xmlStringBuilder.append("\n\t</row>");
        if (i % 100000 == 0) {
            // stores the data to a file with the name i.xml
            storeKBR(xmlStringBuilder.toString(), i);
            xmlStringBuilder = null;
            xmlStringBuilder = new StringBuilder();
        }
    }

    and it works; I get twelve 100 MB files. Now what I'd like to do is have all that data in one file (which I then compress), but if I just remove the if part, I run out of memory. I thought about writing to a file, closing it, then reopening it, but that wouldn't gain me much, since I'd have to load the file into memory when I open it. P.S. If there's a better way to release the builder, do let me know :)
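    One way around the memory ceiling (my own sketch, not from the thread) is to never hold the document in a builder at all: keep one writer open and flush each row as it is produced. In Java that would mean appending to a BufferedWriter inside the while (rs.next()) loop instead of a StringBuilder; the same shape in Python for illustration, with all names hypothetical:

    # stream rows straight to disk; memory use stays flat no matter
    # how many rows the result set yields
    from xml.sax.saxutils import escape

    def export_rows(rows, path):
        with open(path, 'w', encoding='utf-8') as out:
            out.write('<rows>\n')
            for row in rows:                    # e.g. a database cursor
                out.write('\t<row>\n')
                for tag, value in row.items():
                    out.write('\t\t<%s>%s</%s>\n' % (tag, escape(str(value)), tag))
                out.write('\t</row>\n')
            out.write('</rows>\n')

    export_rows([{'ID': 1, 'JED_ID': 7}], 'out.xml')

    Wrapping the writer in a compressing stream (in Java, a GZIPOutputStream) would also avoid ever having the uncompressed file on disk.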

    Read the article

  • What's the most efficient way to load data from a file to a collection on-demand?

    - by Dan
    I'm working on a Java project that will allow users to parse multiple files, each with potentially thousands of lines. The parsed information will be stored in different objects, which will then be added to a collection. Since the GUI won't need to load ALL of these objects at once and keep them in memory, I'm looking for an efficient way to load/unload data from files, so that data is only loaded into the collection when a user requests it. I'm just evaluating options right now. I've also thought about the case where, after loading a subset of the data into the collection and presenting it in the GUI, I need the best way to reload previously observed data: re-run the parser and re-populate the collection and GUI, or find a way to keep the collection in memory, or serialize/deserialize the collection itself? I know that loading/unloading subsets of data can get tricky if some sort of data filtering is performed. Say I filter on ID, so my new subset could contain data from two previously analyzed subsets. This would be no problem if I kept a master copy of the whole data in memory. I've read that google-collections are good and efficient when handling big amounts of data, and offer methods that simplify lots of things, so this might offer an alternative that lets me keep the collection in memory. This is just general talk; the question of which collection to use is a separate and complex thing. Do you know what the general recommendation is for this type of task? I'd like to hear what you've done in similar scenarios. I can provide more specifics if needed.
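    One pattern that fits the load-on-demand requirement is to index the file once and parse records only on access, so the in-memory collection holds byte offsets instead of objects. A minimal sketch (in Python for brevity; a Java version would use RandomAccessFile and seek), with all names hypothetical:

    class LazyRecords:
        """Index byte offsets up front; parse a record only when asked."""
        def __init__(self, path, parse):
            self.path, self.parse = path, parse
            self.offsets = []
            with open(path, 'rb') as f:
                pos = 0
                for line in f:          # one record per line assumed
                    self.offsets.append(pos)
                    pos += len(line)

        def __len__(self):
            return len(self.offsets)

        def __getitem__(self, i):
            with open(self.path, 'rb') as f:
                f.seek(self.offsets[i])
                return self.parse(f.readline())

    Filtering can then operate on a small index (e.g. ID to offset) without keeping a master copy of the parsed objects, and an LRU cache over __getitem__ covers re-displaying recently viewed subsets.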

    Read the article

  • SQL SERVER – Beginning of SQL Server Architecture – Terminology – Guest Post

    - by pinaldave
    SQL Server architecture is a very deep subject, and covering it in a single post is an almost impossible task. However, it is a very popular topic among beginners and advanced users alike. I have asked my friend Anil Kumar, who is an expert in the SQL domain, to help me write a simple post about the beginnings of SQL Server architecture. As stated earlier, this subject is very deep, and in this first article of the series he covers basic terminology; in future articles he will explore the subject further. Anil Kumar Yadav is a Trainer, SQL domain, at Koenig Solutions. Koenig is a premier IT training firm that provides several IT certifications, such as Oracle 11g, Server+, RHCA, SQL Server Training, Prince2 Foundation, etc.

    In this article we will discuss the MS SQL Server architecture. The major components of SQL Server are:

    - Relational Engine
    - Storage Engine
    - SQL OS

    Now we will discuss and understand each one of them.

    1) Relational Engine: Also called the query processor, the Relational Engine includes the components of SQL Server that determine exactly what your query needs to do and the best way to do it. It manages the execution of queries as it requests data from the storage engine and processes the results returned. Tasks of the Relational Engine include:

    - Query Processing
    - Memory Management
    - Thread and Task Management
    - Buffer Management
    - Distributed Query Processing

    2) Storage Engine: The Storage Engine is responsible for storage and retrieval of the data on the storage system (disk, SAN, etc.). When we talk about any database in SQL Server, there are two types of files that are created at the disk level: data files and log files. The data file physically stores the data in data pages. Log files, also known as write-ahead logs, are used for storing transactions performed on the database. Let's understand data files and log files in more detail:

    Data File: A data file stores data in the form of data pages (8 KB each), and these data pages are logically organized in extents.

    Extents: Extents are logical units in the database; a combination of 8 data pages, i.e. 64 KB, forms an extent. Extents can be of two types, mixed and uniform. Mixed extents hold different types of pages, like index, system and object data; uniform extents are dedicated to only one type.

    Pages: Some of the page types that can be stored in SQL Server:

    - Data Page: Holds the data entered by the user, except data of type text, ntext, nvarchar(max), varchar(max), varbinary(max), image and xml.
    - Index: Stores the index entries.
    - Text/Image: Stores LOB (Large Object) data like text, ntext, varchar(max), nvarchar(max), varbinary(max), image and xml.
    - GAM & SGAM (Global Allocation Map & Shared Global Allocation Map): Used for saving information related to the allocation of extents.
    - PFS (Page Free Space): Information related to page allocation and unused space available on pages.
    - IAM (Index Allocation Map): Information pertaining to the extents used by a table or index, per allocation unit.
    - BCM (Bulk Changed Map): Keeps information about the extents changed in a bulk operation.
    - DCM (Differential Change Map): Information about the extents that have been modified since the last BACKUP DATABASE statement, per allocation unit.

    Log File: Also known as the write-ahead log, it stores modifications to the database (DML and DDL). Sufficient information is logged to be able to:

    - Roll back transactions if requested
    - Recover the database in case of failure

    Write-ahead logging is used to create log entries. Transaction logs are written in chronological order in a circular fashion, and the truncation policy for logs is based on the recovery model.

    3) SQL OS: This lies between the host machine (Windows OS) and SQL Server. All the activities performed by the database engine are taken care of by SQL OS. It is a highly configurable operating system layer with a powerful API (application programming interface), enabling automatic locality and advanced parallelism. SQL OS provides various operating-system services, such as memory management, which deals with the buffer pool, the log buffer and deadlock detection using the blocking and locking structure. Other services include exception handling and hosting for external components like the Common Language Runtime (CLR).

    I guess this brief article gives you an idea about the various terminologies used in relation to SQL Server architecture. In future articles we will explore them further.

    Guest Author: The author of the article is Anil Kumar Yadav, Trainer, SQL domain, Koenig Solutions. Koenig is a premier IT training firm that provides several IT certifications, such as Oracle 11g, Server+, RHCA, SQL Server Training, Prince2 Foundation, etc.

    Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology

    Read the article

  • NoSQL Memcached API for MySQL: Latest Updates

    - by Mat Keep
    With data volumes exploding, it is vital to be able to ingest and query data at high speed. For this reason, MySQL has implemented NoSQL interfaces directly to the InnoDB and MySQL Cluster (NDB) storage engines, which bypass the SQL layer completely. Without SQL parsing and optimization, key-value data can be written directly to MySQL tables up to 9x faster, while maintaining ACID guarantees. In addition, users can continue to run complex queries with SQL across the same data set, providing real-time analytics to the business or anonymizing sensitive data before loading it to big data platforms such as Hadoop, while still maintaining all of the advantages of their existing relational database infrastructure. This and more is discussed in the latest Guide to MySQL and NoSQL, where you can learn more about using the APIs to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database.

    The native Memcached API is part of the MySQL 5.6 Release Candidate, and is already available in the GA release of MySQL Cluster. By using the ubiquitous Memcached API for writing and reading data, developers can preserve their investments in Memcached infrastructure by re-using existing Memcached clients, while also eliminating the need for application changes. Speed, when combined with flexibility, is essential in the world of growing data volumes and variability. Complementing NoSQL access, support for online DDL (Data Definition Language) operations in MySQL 5.6 and MySQL Cluster enables DevOps teams to dynamically update their database schema to accommodate rapidly changing requirements, such as the need to capture additional data generated by their applications. These changes can be made without database downtime. Using the Memcached interface, developers do not need to define a schema at all when using MySQL Cluster. Let's look a little more closely at the Memcached implementations for both InnoDB and MySQL Cluster.

    Memcached Implementation for InnoDB

    The Memcached API for InnoDB is previewed as part of the MySQL 5.6 Release Candidate. As illustrated in the following figure, Memcached for InnoDB is implemented via a Memcached daemon plug-in to the mysqld process, with the Memcached protocol mapped to the native InnoDB API.

    Figure 1: Memcached API Implementation for InnoDB

    With the Memcached daemon running in the same process space, users get very low latency access to their data while also leveraging the scalability enhancements delivered with InnoDB and a simple deployment and management model. Multiple web / application servers can remotely access the Memcached / InnoDB server to get direct access to a shared data set. With simultaneous SQL access, users can maintain all the advanced functionality offered by InnoDB, including support for Foreign Keys, XA transactions and complex JOIN operations. Benchmarks demonstrate that the NoSQL Memcached API for InnoDB delivers up to 9x higher performance than the SQL interface when inserting new key/value pairs, with a single low-end commodity server supporting nearly 70,000 transactions per second.

    Figure 2: Over 9x Faster INSERT Operations

    The delivered performance demonstrates that MySQL with the native Memcached NoSQL interface is well suited for high-speed inserts, with the added assurance of transactional guarantees.
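    Because the plug-in speaks the standard Memcached protocol, any stock client can exercise it. A minimal sketch with the Python pymemcache client (assuming a server with the Memcached plug-in listening on the default port 11211; the key and value are illustrative only):

    from pymemcache.client.base import Client

    # talks to the memcached endpoint exposed by the MySQL daemon plug-in
    client = Client(('127.0.0.1', 11211))

    client.set('user:42', 'Alice')   # written through to the underlying table
    value = client.get('user:42')    # read back with no SQL parsing in the path
    print(value)                     # b'Alice'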
    You can check out the latest Memcached / InnoDB developments and benchmarks here. You can learn how to configure the Memcached API for InnoDB here.

    Memcached Implementation for MySQL Cluster

    Memcached API support for MySQL Cluster was introduced with General Availability (GA) of the 7.2 release, and joins an extensive range of NoSQL interfaces that are already available for MySQL Cluster. Like Memcached, MySQL Cluster provides a distributed hash table with in-memory performance. MySQL Cluster extends Memcached functionality by adding support for write-intensive workloads, a full relational model with ACID compliance (including persistence), rich query support, auto-sharding and 99.999% availability, with extensive management and monitoring capabilities. All writes are committed directly to MySQL Cluster, eliminating cache invalidation and the overhead of data consistency checking, to ensure complete synchronization between the database and cache.

    Figure 3: Memcached API Implementation with MySQL Cluster

    Implementation is simple:
    1. The application sends reads and writes to the Memcached process (using the standard Memcached API).
    2. This invokes the Memcached Driver for NDB (which is part of the same process).
    3. The NDB API is called, providing very quick access to the data held in MySQL Cluster's data nodes.

    The solution has been designed to be very flexible, allowing the application architect to find a configuration that best fits their needs. It is possible to co-locate the Memcached API in either the data nodes or application nodes, or alternatively within a dedicated Memcached layer. The benefit of this flexible approach to deployment is that users can configure behavior on a per-key-prefix basis (through tables in MySQL Cluster) and the application doesn't have to care; it just uses the Memcached API and relies on the software to store data in the right place(s) and to keep everything synchronized.

    Using Memcached for Schema-less Data

    By default, every key/value is written to the same table, with each key/value pair stored in a single row, thus allowing schema-less data storage. Alternatively, the developer can define a key-prefix so that each value is linked to a pre-defined column in a specific table. Of course, if the application needs to access the same data through SQL, developers can map key prefixes to existing table columns, enabling Memcached access to schema-structured data already stored in MySQL Cluster.

    Conclusion

    Download the Guide to MySQL and NoSQL to learn more about NoSQL APIs and how you can use them to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database. See how to build a social app with MySQL Cluster and the Memcached API from our on-demand webinar, or take a look at the docs. Don't hesitate to use the comments section below for any questions you may have.

    Read the article
