Search Results

Search found 65999 results on 2640 pages for 'large data volumes'.

  • Syncing Large Directories/Filesystems using USB Drive

    - by Alan Lue
    Does anyone have a solution for syncing large directories/filesystems using just a USB flash drive (and specifically without using a network connection)? The objective is simply to sync a user directory between two computers. The contents of the user directory could amount to a large quantity of data—say, a quantity larger than could be stored on any single USB drive—but the aggregate size of changes that must be propagated by a single sync could easily fit on a USB drive. As an example, suppose a user directory is already synchronized between a desktop and a laptop computer. Here's a use case:

    1. Some changes are made in the user directory on the desktop.
    2. We mount a USB drive onto the desktop and copy whatever changes need to be applied to the laptop user directory in order to synchronize the desktop and laptop user directories.
    3. We now mount the USB drive onto the laptop and apply the changes.

    The desktop and laptop user directories are now synchronized. Any ideas? Alan
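
    For reference, rsync's batch mode (--write-batch on the source machine, --read-batch on the destination) was designed for exactly this courier pattern. A rough manifest-based sketch of the same idea in Python, with all paths hypothetical:

        import hashlib, json, os, shutil

        def manifest(root):
            """Map each file path under root (relative) to a content hash."""
            m = {}
            for dirpath, _, files in os.walk(root):
                for name in files:
                    path = os.path.join(dirpath, name)
                    with open(path, "rb") as f:  # fine for a sketch; stream large files in practice
                        m[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
            return m

        usb, home = "/mnt/usb", "/home/alan"  # hypothetical locations
        # On the desktop: diff against the laptop's last-known manifest (kept on
        # the drive) and copy only new or changed files into a delta folder.
        with open(os.path.join(usb, "laptop-manifest.json")) as f:
            laptop = json.load(f)
        for rel, digest in manifest(home).items():
            if laptop.get(rel) != digest:
                dest = os.path.join(usb, "delta", rel)
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                shutil.copy2(os.path.join(home, rel), dest)
        # On the laptop: copy the delta tree over the user directory, then
        # rewrite laptop-manifest.json on the drive for the next round.

    Deletions and renames need extra bookkeeping in this scheme; rsync's batch files already handle those cases.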

    Read the article

  • jQuery: Sorting hierarchical data?

    - by Industrial
    Hi everybody, I have tried for some time to work out a way of sorting nested categories with jQuery. I failed to build my own plugin to do this, so I tried to find something that was already available. I have now spent a few hours with this one, http://www.jordivila.net/code/js/jquery/ui-widgetTreeList_inheritance/widgetTreeListSample.aspx, and can't get it to work. What are the alternatives for creating a jQuery / jQuery UI script that handles sorting child and parent categories in a way that can be combined with an AJAX PHP backend to handle the actual sorting in the database? Thanks!

    Read the article

  • Approach to data wrapping

    - by Mikhail
    I'm developing in PHP and MySQL. The information about the currently logged-in user is stored in many different tables. I preload the information that I need on each page. However, if something is needed from a rarely accessed table, I do: $newdata = $db->Query('SELECT * FROM rare_table WHERE user_id='.$user->id); I would like to simplify this to the point where I don't have to specify that the query should be limited to this particular user. An ideal function call would be: $newdata = $user->Query('SELECT * FROM rare_table'); Obviously I'd have to parse the SQL and add a WHERE clause, or add to the already existing clause. Questions: are there tools to do this? How would I develop this? Is this even a good idea?
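
    The core of the idea is small enough to sketch (shown in Python rather than PHP; the class and method names are hypothetical): wrap the database handle together with a user id, and have the wrapper splice a user_id filter into each query before executing it.

        import re

        class UserScopedDB:
            """Wraps a DB handle so that every query is filtered to one user."""
            def __init__(self, db, user_id):
                self.db = db
                self.user_id = user_id

            def query(self, sql, params=()):
                # Naive splice: extend an existing WHERE clause, or add one.
                if re.search(r"\bWHERE\b", sql, re.IGNORECASE):
                    sql += " AND user_id = %s"
                else:
                    sql += " WHERE user_id = %s"
                return self.db.query(sql, params + (self.user_id,))

    The naive splice breaks on queries with GROUP BY, ORDER BY, LIMIT, or subqueries, which is one reason a real implementation either parses the SQL properly or restricts the wrapper to queries it builds itself.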

    Read the article

  • Pushing Large Files to 500+ Computers [closed]

    - by WMIF
    I work with a team to manage 500-600 rented Windows 7 computers for an annual conference. We have a large amount of data that needs to be synced to these computers, up to 1 TiB. The computers are divided into rooms and connected through unmanaged gigabit switches. We prepare these computers ahead of time with the Windows installation and configuration, plus any files that are available to us before we send the base image in for replication by the rental company. Every year, presenters arrive on site with up to gigabytes of data that need to be pushed to the room they will be presenting in. Sometimes they only have a few small files, such as a slide PDF, but sometimes the data is much larger, up to 5 GiB. Our current strategy for pushing these files uses batch scripts and RoboCopy. For the large pushes, we actually use a BitTorrent client to generate a torrent file, and then we use the batch/RoboCopy process to push the torrent file into a folder on the remote machines that is monitored by an installed BT client. Often, this data needs to be pushed immediately, within a small time window. We have several machines in a control room, identical to the machines on the floor, that we use for these pushes. We occasionally need to execute a program on the remote machines, and we currently use batch and PsExec for this task. We would love to be able to respond to these last-minute pushes with "sorry, your own fault", but it won't happen. The BT method has given us a much faster response time, but the whole batch process can get messy when multiple jobs are being pushed. We use Enterprise Ghost for other processes, and it doesn't work well at this scale; it is also quite expensive for a once-a-year task like this. EDIT: There is a hard requirement that the remote machines on the floor run Windows. The control machines have no hard OS requirement. I would really like to stay away from multicast because of complications with upstream routers. Is multicast or BitTorrent the better way to go on this? Is there another protocol that might work better?

    Read the article

  • iPhone Core Data - Access deep attributes with to-many relationships

    - by ncohen
    Hi everyone, let's say I have an entity User, which has a one-to-many relationship with the entity Menu, which has a one-to-many relationship with the entity Meal, which has a many-to-one relationship with the entity Recipe, which has a one-to-many relationship with the entity Element. What I would like to do is select the elements that belong to a particular user (username = myUsername) and to particular menus (minDate < menu.date < maxDate). Does anyone have an idea how to get them? Thanks

    Read the article

  • reporting tool/viewer for large datasets

    - by FrustratedWithFormsDesigner
    I have a data processing system that generates very large reports on the data it processes. By "large" I mean that a "small" execution of this system produces about 30 MB of reporting data when dumped into a CSV file, and a large dataset is about 130-150 MB (I'm sure someone out there has a bigger idea of "large", but that's not the point... ;) Excel has the ideal interface for the report consumers in the form of its data lists: users can filter and segment the data on the fly to see the specific details they are interested in, and they can also add notes and markup to the reports, create charts, graphs, etc. They know how to do all this, and it's much easier to let them do it if we just give them the data. Excel was great for the small test datasets, but it cannot handle these large ones. Does anyone know of a tool that can provide a similar interface to Excel data lists, but that can handle much larger files? The next tool I tried was MS Access. I found that the Access file bloats hugely (a 30 MB input file leads to about a 70 MB Access file, and after I open the file, run a report, and close it, the file is at 120-150 MB!), and the import process is slow and very manual (currently, the CSV files are created by the same PL/SQL script that runs the main process, so there's next to no intervention on my part). I also tried an Access database with tables linked to the database tables that store the report data, and that was many times slower (for some reason, SQL*Plus could query and generate the report file in a minute or so, while Access would take anywhere from 2-5 minutes for the same data). (If it helps, the data processing system is written in PL/SQL and runs on Oracle 10g.)

    Read the article

  • mysql - check if data exists across multiple tables

    - by Dd Daym
    I am currently running this query in MySQL to check whether the specified values exist in their associated tables: SELECT COUNT(artist.artist_id), COUNT(album.album_id), COUNT(tracks.track_id) FROM artist, album, tracks WHERE artist.artist_id = 320295 OR album.album_id = 1234 OR tracks.track_id = 809 The result I get from running this query is all 1s, suggesting that all the conditions in the WHERE clause are true. To check the query's reliability, I changed tracks.track_id = 809 to 802, which I know does not match. However, the results are still all 1s, as if everything had matched, even though I purposely used a value that should not match. How do I get it to show 1 for a match and 0 for no match within the same query? EDIT: I have inserted an image of the query running
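
    For what it's worth, the comma join builds a cross product of the three tables, so once any single condition matches, rows survive the WHERE clause and all three COUNT columns count them. Independent EXISTS subqueries avoid this. A sketch (table and column names are from the question; the connection details are hypothetical):

        import pymysql  # any DB-API driver works the same way here

        conn = pymysql.connect(host="localhost", user="user",
                               password="...", db="music")
        sql = """
            SELECT
              EXISTS(SELECT 1 FROM artist WHERE artist_id = %s) AS artist_found,
              EXISTS(SELECT 1 FROM album  WHERE album_id  = %s) AS album_found,
              EXISTS(SELECT 1 FROM tracks WHERE track_id  = %s) AS track_found
        """
        with conn.cursor() as cur:
            cur.execute(sql, (320295, 1234, 802))
            print(cur.fetchone())  # e.g. (1, 1, 0): artist and album match, track doesn't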

    Read the article

  • Errors with large data sources

    - by The Sheek Geek
    I'm doing some benchmarking on large data sources and on binding/exporting data for reporting. I started by using a DataSet, filling it with 100,000 rows, and then attempting to open a Crystal Report with the retrieved data. I noticed that the DataSet filled just fine (it took about 779 milliseconds); however, when attempting to export the data to the report, or even bind it to a GridView, the application would fail with an OutOfMemoryException. Has anyone experienced this before, or does anyone have an idea of how to get around it? It is very possible that clients will run reports for years' worth of data, and 100,000 rows are not inconceivable. The application and the benchmark code are written in C# against Oracle and SQL Server databases. I still have some data sources to test, but I would like to know how to get around this, just in case I don't find a better solution.

    Read the article

  • Get data from online once and then view it offline

    - by user313100
    Okay, I want an app that takes phone numbers from an online database and displays them in a table view. When the user is not online, I want them to still be able to see the numbers they already got from the database in the table view. When the user goes back online, the database updates the view. My question is: is this possible to do, and if so, what's the best way to approach it? (Bit of a newbie, please help me out.)

    Read the article

  • Validation error language in an ASP.NET Dynamic Data website

    - by loviji
    Hello, I need to change the language of validation error messages. The validation logic must not change; I just want to translate The field f5080eb8_0a83_4b89_b339_233528441711 must be a valid integer. into another language, for example from English into Italian. I searched my project for this text and got zero results. So, what is the best way to translate the validation error text into another language? Any ideas? Thanks!

    Read the article

  • Statistical analysis on large data set to be published on the web

    - by dassouki
    I have a non-computer-related data logger that collects data from the field. This data is stored as text files, and I manually lump the files together and organize them. The current format is one CSV file per year per logger. Each file is around 4,000,000 lines x 7 loggers x 5 years = a lot of data. Some of the data is organized as bins: item_type, item_class, item_dimension_class; other data is more unique, such as item_weight, item_color, date_collected, and so on. Currently, I do statistical analysis on the data using a Python/NumPy/Matplotlib program I wrote. It works fine, but the problem is that I'm the only one who can use it, since it and the data live on my computer. I'd like to publish the data on the web using a Postgres DB; however, I need to find or implement a statistical tool that will take a large Postgres table and return statistical results within an adequate time frame. I'm not familiar with Python for the web; however, I'm proficient with PHP on the web side and Python on the offline side. Users should be allowed to create their own histograms and data analyses. For example, one user could search for all blue items shipped between week x and week y, while another could plot the weight distribution of all items by hour for the whole year. I was thinking of creating and indexing my own statistical tools, or automating the process somehow to emulate most queries, but that seems inefficient. I'm looking forward to hearing your ideas. Thanks
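
    The usual approach is to push each user-defined query down into Postgres as an aggregate, so only a handful of summary rows ever reach PHP or the browser. A sketch of one such query in Python (the table name items and the connection string are assumptions; the column names come from the description above):

        import psycopg2

        conn = psycopg2.connect("dbname=logger_data user=web")

        def hourly_weight_profile(color, start, end):
            """Average item_weight per hour of day for one color and date range."""
            sql = """
                SELECT extract(hour FROM date_collected) AS hour,
                       avg(item_weight) AS avg_weight,
                       count(*) AS n
                FROM items
                WHERE item_color = %s AND date_collected BETWEEN %s AND %s
                GROUP BY 1 ORDER BY 1
            """
            with conn.cursor() as cur:
                cur.execute(sql, (color, start, end))
                return cur.fetchall()  # at most 24 rows, cheap to chart client-side

        rows = hourly_weight_profile("blue", "2010-01-04", "2010-01-11")

    With indexes on item_color and date_collected, the same filter-group-aggregate pattern can stay responsive over the full dataset and covers most of the histogram-style questions described.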

    Read the article

  • Strategies for syncing and caching data between iPhone and server

    - by Blankman
    Say I have a TODO-list iPhone app that can be edited and viewed from both a web application and the iPhone application. When a user views his TODO lists or sub-items on the iPhone, I would think the app shouldn't hit the web application's API every time the user views a particular list, but should instead cache the values locally and only hit the web when things change. What strategies are there for this type of scenario?
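
    One common pattern (a sketch; the endpoint, file name, and field names are all hypothetical): keep a local cache plus a last-sync timestamp, and ask the server only for items changed since then.

        import json, time, urllib.request

        CACHE = "todo-cache.json"                         # local store
        API = "https://example.com/api/todos?since={ts}"  # delta endpoint

        def load_cache():
            try:
                with open(CACHE) as f:
                    return json.load(f)
            except FileNotFoundError:
                return {"last_sync": 0, "items": {}}

        def sync():
            """Merge server-side changes since the last sync into the cache."""
            cache = load_cache()
            with urllib.request.urlopen(API.format(ts=cache["last_sync"])) as resp:
                for item in json.load(resp):
                    cache["items"][str(item["id"])] = item
            cache["last_sync"] = int(time.time())
            with open(CACHE, "w") as f:
                json.dump(cache, f)
            return cache["items"]  # render these whether online or not

    On the iPhone the cache would typically live in Core Data or a plist rather than a JSON file, but the shape of the protocol is the same.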

    Read the article

  • Databases: Migrate data between MS Access DB and MySQL

    - by Dean
    Hello, I have 2 databases: one is an MS Access DB from an old website, and the other is the MySQL DB of the new Joomla+VirtueMart based website. I need to migrate existing products from MS Access to MySQL. I thought of putting both on a server and writing SQL queries in MySQL Workbench until I have a good script for that, but I'm very new to SQL, so I'd rather avoid it. Is there a better, more efficient way to do that?
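
    A small script is often enough for a one-off migration. A sketch in Python (every table, column, path, and credential below is a placeholder; the real VirtueMart product schema has more columns):

        import pyodbc   # reads the Access file via the Access ODBC driver (Windows)
        import pymysql  # writes to the MySQL side

        access = pyodbc.connect(
            r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\old_site\old.mdb")
        mysql = pymysql.connect(host="localhost", user="joomla",
                                password="...", db="joomla_db")

        src = access.cursor()
        dst = mysql.cursor()
        src.execute("SELECT product_name, price, description FROM products")
        for row in src.fetchall():
            dst.execute("INSERT INTO new_products (name, price, description)"
                        " VALUES (%s, %s, %s)", tuple(row))
        mysql.commit()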

    Read the article

  • What type of data store should I use for my iOS app?

    - by mwiederrecht
    I am pretty new to iOS and to using servers, so forgive me. I am building an iOS app for research. I need to monitor things that the user does and then push the data up to a server for analysis (yes, with user and IRB permission). On the client side I need to keep quite a bit of data that won't really change except when pulling an updated version from the server, plus a minimal amount of user-specific data. Most of the data I collect needs to be pushed to a server for analysis and can then be deleted from the client side. I am struggling to figure out what kind of data store I need, especially since I am not quite sure how the pushing and pulling from the server works yet. Does it make sense to use Core Data? XML? SQLite? I like the Core Data idea, but I am not sure what kind of problems I will run into when I need to send large amounts of data to and from the server. I imagine I might need to send data in a different form than it is stored in on either end, so what kind of overhead am I likely to run into in the conversion process? Is there a good format to save stuff in that would work well on both ends AND for sending the data? As you can probably tell, I could use some advice. Thanks!

    Read the article

  • How does Dropbox version/upload large files?

    - by barfoon
    Hey everyone, I have a free Dropbox account (2 GB), and I was wondering how the versioning of large files works. I have a full backup of all my web files that sits at just over 1 GB. After the initial upload of 1 GB, every time it syncs, will Dropbox figure out the delta of the file, or will it have to upload the entire thing again to version it? It would be cool to always have an up-to-date version of a large file, but I don't want to kill my bandwidth uploading 1 GB every time. Is this possible? Thanks,
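
    The delta idea itself is easy to sketch (this illustrates block-level sync in general, not Dropbox's actual implementation): split the file into fixed-size blocks, hash each block, and upload only the blocks whose hashes the server hasn't already seen for this file.

        import hashlib

        BLOCK = 4 * 1024 * 1024  # block size chosen arbitrarily for illustration

        def block_hashes(path):
            """Hash each fixed-size block of the file."""
            hashes = []
            with open(path, "rb") as f:
                while chunk := f.read(BLOCK):
                    hashes.append(hashlib.sha256(chunk).hexdigest())
            return hashes

        # Only blocks missing from the previous version would need uploading.
        old = set(block_hashes("backup-v1.tar"))
        new = block_hashes("backup-v2.tar")
        to_upload = [(i, h) for i, h in enumerate(new) if h not in old]
        print(f"{len(to_upload)} of {len(new)} blocks changed")

    Note that an insertion near the start of a file shifts every later block and defeats fixed-size schemes, which is why tools like rsync use rolling checksums instead.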

    Read the article

  • C++ union data structure, easy access to bits within a DWORD

    - by TK
    I'm running through a set of DirectX tutorials online, and I have the following structure: struct CUSTOMVERTEX { FLOAT x, y, z, rhw; // from the D3DFVF_XYZRHW flag DWORD color; // from the D3DFVF_DIFFUSE flag } My basic understanding of DirectX leads me to think that color is made up of 8-bit alpha, red, green and blue channels. I am attempting to get easy access to these channels. Rather than write the following code numerous times (within the CUSTOMVERTEX structure): public: int red() { return (color & 0x00FF0000) >> 16; } I thought I could write a more elegant solution with a combination of a union and a struct, e.g.: struct CUSTOMVERTEX { FLOAT x, y, z, rhw; // from the D3DFVF_XYZRHW flag #pragma pack(2) union { DWORD color; // from the D3DFVF_DIFFUSE flag struct { char a; char r; char g; char b; }; }; } However, this does not appear to function as expected; the values in r, g, and b almost appear to be the reverse of what's in color, e.g. if color is 0x12345678, then a = 0x78 and r = 0x56. Is this an endianness issue? Also, what other problems could I be expecting from this solution, e.g. overflow from the color members? I guess what I'm asking is... is there a better way to do this?!
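
    It is an endianness issue: on a little-endian machine the least significant byte of the DWORD sits at the lowest address, so the first char in the overlaid struct reads 0x78 rather than 0x12. The byte order is easy to see directly (shown in Python only because it makes the layout trivial to print):

        import struct

        # Little-endian layout of the DWORD 0x12345678: least significant byte
        # first in memory, so a union's first char member overlays 0x78.
        print(struct.pack("<I", 0x12345678).hex())  # '78563412'
        print(struct.pack(">I", 0x12345678).hex())  # '12345678' (big-endian)

    So on little-endian hardware the overlay struct would need its members in b, g, r, a order. Note also that reading a union member other than the one last written is technically undefined behavior in C++, which is why accessor functions (or bit-fields) are the portable choice.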

    Read the article

  • ado.net data entity problem

    - by ognjenb
    I get this error: Cannot implicitly convert type 'System.Linq.IQueryable' to 'Mvc.Models.engineer'. An explicit conversion exists (are you missing a cast?) after writing this code: engineer Ing = new engineer(); Ing = from j in testPersons.ibekoengineer select j.Name; What is wrong?

    Read the article

  • Accessing large log files on a Unix machine with TextPad

    - by Jason
    Hi, I'm interested in accessing large log files on a Unix server with TextPad (TextPad for historical reasons; I personally prefer less, awk, grep, etc., of course), but many of my personnel would rather use TextPad; they have years of experience with it and can tweak it to do whatever they want. The problem is that if I connect, for example, with WinSCP to fetch the log files into TextPad, it first fetches the full log, so the user needs to wait, memory bloats, and so on. I would rather have TextPad somehow access the Unix machine and get only the relevant segment of the log file (large log files can be gigabytes). Does anyone know how this can be achieved?
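
    One workable compromise (a sketch; the host, path, and SSH setup are assumptions): a small helper that pulls only the tail of the remote log over SSH and hands the result to TextPad as a local temp file.

        import paramiko  # assumes SSH access to the log server

        def fetch_tail(host, user, path, lines=5000):
            """Fetch only the last `lines` lines of a remote log file."""
            client = paramiko.SSHClient()
            client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            client.connect(host, username=user)
            _, stdout, _ = client.exec_command(f"tail -n {lines} {path}")
            data = stdout.read()
            client.close()
            return data

        segment = fetch_tail("loghost", "alice", "/var/log/app/app.log")
        with open("app-tail.log", "wb") as f:
            f.write(segment)  # open this local file in TextPad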

    Read the article

  • iPhone - Create non-persistent entities in Core Data

    - by ncohen
    Hi everyone, I would like to use entity objects but not store them... I read that I could create them like this: myElement = (Element *)[NSEntityDescription insertNewObjectForEntityForName:@"Element" inManagedObjectContext:managedObjectContext]; and right after that remove them: [managedObjectContext deleteObject:myElement]; Then I can use my elements: myElement.property1 = @"Hello"; This works pretty well, even though it's probably not the most optimal way to do it... Then I tried to use this in my UITableView. The problem is that the objects get released after initialization: my table becomes empty when I move it! Thanks

    Read the article

  • Data Structure / Hash Function to link Sets of Ints to Value

    - by Gaminic
    Given n integer id's, I wish to link all possible sets of up to k id's to a constant value. What I'm looking for is a way to translate sets (e.g. {1, 5}, {1, 3, 5} and {1, 2, 3, 4, 5, 6, 7}) to unique values. Guarantees:

    - n < 100 and k < 10 (again: set sizes will range in [1, k]).
    - The order of id's doesn't matter: {1, 5} == {5, 1}.
    - All combinations are possible, but some may be excluded.
    - All sets and values are constant and made only once. No deletes or inserts, no value updates.
    - Once generated, the only operations taking place will be look-ups, which will be frequent and one-directional (given a set, look up its value).
    - There is no need to sort (or otherwise organize) the values.

    Additionally, it would be nice (but not obligatory) if "neighboring" sets (drop one id, add one id, swap one id, etc.) are easy to reach, as well as "all sets that include at least this set". Any ideas?
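
    With n < 100 and k < 10, two cheap schemes fit (a sketch; the values are made up): hash a canonical, order-free form of each set, or encode each set as a bitmask over ids 0..99, which makes neighboring sets one bit operation away.

        # Canonical, order-free keys: frozensets hash identically regardless of order.
        values = {
            frozenset({1, 5}): 42.0,
            frozenset({1, 3, 5}): 7.5,
        }
        assert values[frozenset({5, 1})] == 42.0  # {1, 5} == {5, 1}

        # Alternative: encode each set as a bitmask over ids 0..99.
        def mask(ids):
            m = 0
            for i in ids:
                m |= 1 << i
            return m

        by_mask = {mask(k): v for k, v in values.items()}
        drop_5 = mask({1, 5}) & ~(1 << 5)  # neighbor set {1}: clear one bit
        add_7 = mask({1, 5}) | (1 << 7)    # neighbor set {1, 5, 7}: set one bit

    The mask form also answers "all sets that include at least this set": a stored set s includes a query set q exactly when (mask(q) & mask(s)) == mask(q).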

    Read the article
