Search Results

Search found 65999 results on 2640 pages for 'large data volumes'.


  • iPhone Core Data - Access deep attributes with to-many relationships

    - by ncohen
    Hi everyone, let's say I have an entity User, which has a one-to-many relationship with the entity Menu, which has a one-to-many relationship with the entity Meal, which has a many-to-one relationship with the entity Recipe, which has a one-to-many relationship with the entity Element. What I would like to do is select the elements that belong to a particular user (username = myUsername) and to particular menus (minDate < menu.date < maxDate). Does anyone have an idea how to get them? Thanks

    Read the article

  • Approach to data wrapping

    - by Mikhail
    I'm developing in PHP and MySQL. The information about the currently logged-in user is stored in many different tables. I preload the information that I need on each page. However, if something is needed from a rarely accessed table, then I do:

        $newdata = $db->Query('SELECT * FROM rare_table WHERE user_id='.$user->id);

    I would like to simplify this to the point where I don't have to specify that the query should be limited to this particular user. An ideal function call would be:

        $newdata = $user->Query('SELECT * FROM rare_table');

    Obviously I'd have to parse the SQL and add a WHERE clause, or add to the already existing clause. Questions: are there tools to do this? How can I develop this? Is this even a good idea?
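
    A naive sketch of the rewriting step described above, in Python rather than PHP (the function and column names are invented, and this deliberately ignores subqueries, joins, and quoting; a real version would need an SQL parser):

        import re

        def scoped_query(sql, user_id):
            # Append the user restriction, or AND it onto an existing WHERE.
            # Parameterized so the id is never spliced into the SQL string.
            if re.search(r'\bWHERE\b', sql, re.IGNORECASE):
                return sql + ' AND user_id = %s', (user_id,)
            return sql + ' WHERE user_id = %s', (user_id,)

        query, params = scoped_query('SELECT * FROM rare_table', user_id=42)
        # -> ('SELECT * FROM rare_table WHERE user_id = %s', (42,))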

    Read the article

  • reporting tool/viewer for large datasets

    - by FrustratedWithFormsDesigner
    I have a data processing system that generates very large reports on the data it processes. By "large" I mean that a "small" execution of this system produces about 30 MB of reporting data when dumped into a CSV file, and a large dataset is about 130-150 MB (I'm sure someone out there has a bigger idea of "large" but that's not the point... ;)

    Excel has the ideal interface for the report consumers in the form of its data lists: users can filter and segment the data on the fly to see the specific details they are interested in, and they can also add notes and markup to the reports, create charts, graphs, etc. They know how to do all this, and it's much easier to let them do it if we just give them the data. Excel was great for the small test datasets, but it cannot handle these large ones. Does anyone know of a tool that can provide a similar interface to Excel data lists but can handle much larger files?

    The next tool I tried was MS Access, and I found that the Access file bloats hugely (a 30 MB input file leads to about a 70 MB Access file, and when I open the file, run a report, and close it, the file is at 120-150 MB!), and the import process is slow and very manual (currently, the CSV files are created by the same PL/SQL script that runs the main process, so there's next to no intervention on my part). I also tried an Access database with tables linked to the database tables that store the report data, and that was many times slower (for some reason, SQL*Plus could query and generate the report file in a minute or so, while Access would take anywhere from 2-5 minutes for the same data).

    (If it helps, the data processing system is written in PL/SQL and runs on Oracle 10g.)
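
    Not a tool recommendation, but a minimal Python sketch (pandas; the file and column names are invented) of the usual stopgap: a chunked filter that trims a 150 MB CSV down to an extract Excel can open comfortably.

        import pandas as pd

        matches = []
        for chunk in pd.read_csv('report.csv', chunksize=100_000):
            # Each chunk fits in memory; keep only the rows of interest.
            matches.append(chunk[chunk['status'] == 'FAILED'])

        pd.concat(matches).to_csv('filtered.csv', index=False)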

    Read the article

  • Connecting data to a GUI - OOP

    - by tau
    I have an application with several graphs and tables on it. I worked fast and just made classes like Graph and Table that each contain a request object (pseudo-code):

        class Graph {
            private request;

            public function setDateRange(dateRange) {
                request.setDateRange(dateRange);
            }

            public function refresh() {
                request.getData(function() {
                    // refresh the display
                });
            }
        }

    Upon a GUI event (say, someone changes the date-range dropdown), I'd just call the setters on the Graph instance and then refresh it. Well, when I added other GUI elements like tables and whatnot, they all basically had similar methods (setDateRange and other things common to the request). What are some more elegant OOP ways of doing this? The application is very simple and I don't want to over-architect it, but I also don't want to have a bunch of classes with basically the same methods that just route to a request object. I also don't want to set up each GUI class to inherit from the request class, but I'm open to any ideas really.
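
    One frequently suggested shape for this, sketched in Python (the names are hypothetical, not from the post): share one request object by composition and put the routing in a single base widget, so each concrete class only supplies its rendering.

        class Request:
            def __init__(self):
                self.date_range = None

            def set_date_range(self, date_range):
                self.date_range = date_range

            def get_data(self, callback):
                callback([])  # stand-in for the real fetch

        class Widget:
            # Composition: widgets hold a Request, they don't inherit from it.
            def __init__(self, request):
                self.request = request

            def refresh(self):
                self.request.get_data(self.render)

            def render(self, data):
                raise NotImplementedError

        class Graph(Widget):
            def render(self, data):
                print('drawing graph with', data)

        class Table(Widget):
            def render(self, data):
                print('filling table with', data)

        # One GUI event reconfigures every widget through the shared request.
        shared = Request()
        widgets = [Graph(shared), Table(shared)]
        shared.set_date_range(('2010-01-01', '2010-02-01'))
        for w in widgets:
            w.refresh()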

    Read the article

  • Extending existing data structure in Scala.

    - by Lukasz Lew
    I have a normal tree defined in Scala:

        sealed abstract class Tree
        case class Node (...) extends Tree
        case class Leaf (...) extends Tree

    Now I want to add a member variable to all nodes and leaves in the tree. Is it possible with the extends keyword, or do I have to modify the tree classes by adding a type parameter [T]?

    Read the article

  • Syncing Large Directories/Filesystems using USB Drive [closed]

    - by Alan Lue
    Does anyone have a solution for syncing large directories/filesystems using just a USB flash drive (and specifically without using a network connection)? The objective is simply to sync a user directory between two computers. The contents of the user directory could amount to a large quantity of data, say, a quantity larger than could be stored on any single USB drive, but the aggregate size of changes that must be propagated by a single sync could easily fit on a USB drive. As an example, suppose a user directory is already synchronized between a desktop and a laptop computer. Here's a use case:

    1. Some changes are made in the user directory on the desktop.
    2. We mount a USB drive on the desktop and copy whatever changes need to be applied to the laptop user directory in order to synchronize the desktop and laptop user directories.
    3. We mount the USB drive on the laptop and apply the changes. The desktop and laptop user directories are now synchronized.

    Any ideas? Alan
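
    A minimal Python sketch of the "carry only the delta on the stick" idea, assuming each side keeps a manifest of content hashes on the USB drive (all paths are hypothetical; deletions and the apply step on the laptop are left out):

        import hashlib
        import json
        import shutil
        from pathlib import Path

        def file_hash(path: Path) -> str:
            h = hashlib.sha256()
            with path.open('rb') as f:
                for chunk in iter(lambda: f.read(1 << 20), b''):
                    h.update(chunk)
            return h.hexdigest()

        def snapshot(root: Path) -> dict:
            # Relative path -> content hash for every file under root.
            return {str(p.relative_to(root)): file_hash(p)
                    for p in root.rglob('*') if p.is_file()}

        def export_delta(root: Path, usb: Path) -> None:
            # Compare the live tree against the manifest from the last sync
            # (kept on the USB drive) and copy only new or changed files.
            manifest = usb / 'manifest.json'
            old = json.loads(manifest.read_text()) if manifest.exists() else {}
            new = snapshot(root)
            for rel, digest in new.items():
                if old.get(rel) != digest:
                    dest = usb / 'delta' / rel
                    dest.parent.mkdir(parents=True, exist_ok=True)
                    shutil.copy2(root / rel, dest)
            manifest.write_text(json.dumps(new))

        export_delta(Path.home() / 'userdir', Path('/mnt/usb'))  # hypothetical mounts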

    Read the article

  • Statistical analysis on large data set to be published on the web

    - by dassouki
    I have a non-computer-related data logger that collects data from the field. This data is stored as text files, and I manually lump the files together and organize them. The current format is one CSV file per year per logger. Each file is around 4,000,000 lines x 7 loggers x 5 years = a lot of data. Some of the data is organized in bins (item_type, item_class, item_dimension_class), and other data is more unique, such as item_weight, item_color, date_collected, and so on.

    Currently, I do statistical analysis on the data using a Python/NumPy/matplotlib program I wrote. It works fine, but the problem is that I'm the only one who can use it, since it and the data live on my computer. I'd like to publish the data on the web using a Postgres DB; however, I need to find or implement a statistical tool that will take a large Postgres table and return statistical results within an adequate time frame. I'm not familiar with Python for the web; however, I'm proficient with PHP on the web side and Python on the offline side.

    Users should be allowed to create their own histograms and data analyses. For example, one user can search for all items that are blue and were shipped between week x and week y, while another user can sort the weight distribution of all items by hour for the whole year. I was thinking of creating and indexing my own statistical tools, or automating the process somehow to emulate most queries, but this seems inefficient. I'm looking forward to hearing your ideas. Thanks
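
    A minimal sketch of the usual approach (push the aggregation into Postgres and ship only summary rows to the page), in Python with psycopg2; every table and column name here is invented:

        import psycopg2

        QUERY = """
            SELECT date_trunc('week', date_collected) AS week, count(*) AS n
            FROM items
            WHERE item_color = %s
              AND date_collected BETWEEN %s AND %s
            GROUP BY week
            ORDER BY week;
        """

        def weekly_counts(conn, color, start, end):
            # Postgres scans the big table; only one row per week crosses
            # the wire, so the web tier never holds millions of lines.
            with conn.cursor() as cur:
                cur.execute(QUERY, (color, start, end))
                return cur.fetchall()

        conn = psycopg2.connect('dbname=loggerdata')  # hypothetical DSN
        rows = weekly_counts(conn, 'blue', '2009-01-01', '2009-12-31')

    With an index covering (item_color, date_collected), queries of this shape usually come back quickly enough for interactive charting.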

    Read the article

  • get data from online once and then view it offline

    - by user313100
    Okay, I want to have an app that takes phone numbers from an online database and displays them in a table view. When the user is not online, I want them to still be able to see, in the table view, the numbers they already got from the database. If the user goes back online, the database updates the view. My question is: is this possible to do, and if so, what's the best way to approach it? (Bit of a newbie, please help me out.)

    Read the article

  • Pushing Large Files to 500+ Computers [closed]

    - by WMIF
    I work with a team to manage 500-600 rented Windows 7 computers for an annual conference. We have a large amount of data that needs to be synced to these computers, up to 1 TiB. The computers are divided into rooms and connected through unmanaged gigabit switches. We prepare these computers ahead of time with the Windows installation and configuration, plus any files that are available to us before we send the base image in for replication by the rental company.

    Every year, presenters show up on site with up to gigabytes of data that need to be pushed to the room they will be presenting in. Sometimes they have only a few small files, such as a slide PDF, but the data can sometimes be much larger, 5 GiB. Our current strategy for pushing these files is batch scripts and RoboCopy. For the large pushes, we actually use a BitTorrent client to generate a torrent file, and then we use the batch/RoboCopy combination to push the torrent into a folder on the remote machines that is being monitored by an installed BT client. Often, this data needs to be pushed immediately within a small time window. We have several machines in a control room, identical to the machines on the floor, that we use for these pushes. We occasionally need to execute a program on the remote machines, and we currently use batch and PsExec to handle this task.

    We would love to be able to respond to these last-minute pushes with "sorry, your own fault", but it won't happen. The BT method has given us a much faster response time, but the whole batch process can get messy when multiple jobs are being pushed. We use Enterprise Ghost for other processes, and it doesn't work well at this scale, plus it is really quite expensive for a once-a-year task like this.

    EDIT: There is a hard requirement that the remote machines on the floor run Windows. The control machines have no hard OS requirement. I would really like to stay away from multicast because of complications with upstream routers. Is multicast or BitTorrent the better way to go on this? Is there another protocol that might work better?

    Read the article

  • Errors with large data sources

    - by The Sheek Geek
    I'm doing some benchmarking on large data sources and binding/exporting data for reporting. I started by using a DataSet, filling it with 100,000 rows and then attempting to open a Crystal Report with the retrieved data. I noticed that the DataSet filled just fine (it took about 779 milliseconds); however, when attempting to export the data to the report, or even bind it to a GridView, the application would fail with an OutOfMemoryException. Has anyone experienced this before, or have an idea of how to get around it? It is very possible that clients will run reports for years' worth of data, and 100,000 rows are not inconceivable. The application and the benchmark code are written in C#, using Oracle and SQL Server databases. I still have some data sources to test, but I would like to know how to get around this, just in case I don't find a better solution.

    Read the article

  • C++ union data structure, easy access to bits within a DWORD

    - by TK
    I'm running through a set of DirectX tutorials online and I have the following structure:

        struct CUSTOMVERTEX {
            FLOAT x, y, z, rhw; // from the D3DFVF_XYZRHW flag
            DWORD color;        // from the D3DFVF_DIFFUSE flag
        };

    My basic understanding of DirectX leads me to think that color is made up of 8-bit alpha, red, green and blue channels. I am attempting to get easy access to these channels. Rather than write the following code numerous times (within the CUSTOMVERTEX structure):

        public:
            int red() { return (color & 0x00FF0000) >> 16; }

    I thought I could write a more elegant solution with a combination of a union and a struct, e.g.:

        struct CUSTOMVERTEX {
            FLOAT x, y, z, rhw; // from the D3DFVF_XYZRHW flag
        #pragma pack(2)
            union {
                DWORD color; // from the D3DFVF_DIFFUSE flag
                struct {
                    char a;
                    char r;
                    char g;
                    char b;
                };
            };
        };

    However, this does not appear to function as expected; the values in r, g and b are almost the reverse of what's in color, e.g. if color is 0x12345678, then a = 0x78 and r = 0x56. Is this an endianness issue? Also, what other problems could I expect from this solution, e.g. overflow from the color members? I guess what I'm asking is... is there a better way to do this?

    Read the article

  • Validation error language in ASP.NET Dynamic Data web site

    - by loviji
    Hello, I need to change the language of a validation error message. The validation logic must not be changed; I just want to translate "The field f5080eb8_0a83_4b89_b339_233528441711 must be a valid integer." into another language, for example from English into Italian. I searched my project for this text: result = 0. So, what is the best way to translate the validation error text into another language? Some ideas? Thanks!

    Read the article

  • mysql - check if data exists across multiple tables

    - by Dd Daym
    I am currently running this query in MySQL to check whether the specified values exist in their associated tables:

        SELECT COUNT(artist.artist_id), COUNT(album.album_id), COUNT(tracks.track_id)
        FROM artist, album, tracks
        WHERE artist.artist_id = 320295
           OR album.album_id = 1234
           OR tracks.track_id = 809

    The result I get from running this query is all 1s, meaning that all the conditions after the WHERE clause are true. To further check the query's reliability, I changed tracks.track_id = 809 to 802, which I know does not match. However, the results displayed are still all 1s, meaning that everything was reported as matched even though I purposefully inserted a value that should not match. How do I get it to show 1 for a match and 0 for no match within the same query? EDIT: I have inserted an image of the query running.
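
    For reference, one common fix is to test each table independently with EXISTS instead of cross-joining all three tables; a sketch wrapped in a Python runner (the connection details are invented, the ids are from the question):

        import mysql.connector

        conn = mysql.connector.connect(user='root', database='music')
        cur = conn.cursor()

        # Each EXISTS is evaluated on its own, so the result row carries a
        # 1 or 0 per table instead of counts taken over a cross join.
        cur.execute("""
            SELECT EXISTS(SELECT 1 FROM artist WHERE artist_id = %s) AS has_artist,
                   EXISTS(SELECT 1 FROM album  WHERE album_id  = %s) AS has_album,
                   EXISTS(SELECT 1 FROM tracks WHERE track_id  = %s) AS has_track
        """, (320295, 1234, 809))
        print(cur.fetchone())  # e.g. (1, 1, 0) when the track id does not match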

    Read the article

  • ado.net data entity problem

    - by ognjenb
    I have this error:

        Cannot implicitly convert type 'System.Linq.IQueryable' to 'Mvc.Models.engineer'.
        An explicit conversion exists (are you missing a cast?)

    after writing this code:

        engineer Ing = new engineer();
        Ing = from j in testPersons.ibekoengineer
              select j.Name;

    What is wrong?

    Read the article

  • Databases: Migrate data between MS Access DB and MySQL

    - by Dean
    Hello, I have 2 databases: one is an MS Access DB from an old website, and the other is the MySQL database of the new Joomla + VirtueMart based website. I need to migrate the existing products from MS Access to MySQL. I thought of putting both on a server and writing SQL queries in MySQL Workbench until I have a good script for that, but I'm very new to SQL, so I'd rather avoid that. Is there a better, more efficient way to do this?
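
    One low-SQL route is a small script that reads from Access over ODBC and inserts into MySQL; a sketch in Python where every path, table, and column name is invented for illustration:

        import pyodbc             # reads the .mdb via the Windows Access ODBC driver
        import mysql.connector    # pip install mysql-connector-python

        access = pyodbc.connect(
            r'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};'
            r'DBQ=C:\old_site\shop.mdb')
        target = mysql.connector.connect(user='joomla', password='secret',
                                         database='newsite')

        src = access.cursor()
        dst = target.cursor()
        src.execute('SELECT product_name, price, stock FROM products')
        for name, price, stock in src.fetchall():
            dst.execute('INSERT INTO vm_products '
                        '(product_name, product_price, product_in_stock) '
                        'VALUES (%s, %s, %s)', (name, price, stock))
        target.commit()

    A CSV export from Access followed by MySQL's LOAD DATA INFILE is another option that avoids code entirely.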

    Read the article

  • Strategies for syncing and caching data between iPhone and server

    - by Blankman
    Say I have a TODO-list iPhone app that can be edited/viewed from both a web application and the iPhone application. When a user on the iPhone views all his todo lists, or their sub-items, the app shouldn't be hitting the web application's API every time the user views a particular list; rather, it should cache the values locally and only hit the web when things change. What strategies are there for this type of scenario?
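
    Platform aside, one standard shape for this is the conditional GET: keep the last payload plus its Last-Modified (or ETag) header, and let the server reply 304 Not Modified when nothing changed. A sketch in Python with the requests library (the endpoint and cache file are made up):

        import json
        import requests

        CACHE = 'todo_cache.json'
        URL = 'https://example.com/api/todo_lists'  # hypothetical endpoint

        def load_cache():
            try:
                with open(CACHE) as f:
                    return json.load(f)
            except FileNotFoundError:
                return {'last_modified': None, 'lists': []}

        def fetch_lists():
            cache = load_cache()
            headers = {}
            if cache['last_modified']:
                headers['If-Modified-Since'] = cache['last_modified']
            try:
                resp = requests.get(URL, headers=headers, timeout=5)
            except requests.ConnectionError:
                return cache['lists']   # offline: serve the cached copy
            if resp.status_code == 304:
                return cache['lists']   # unchanged: no payload transferred
            cache = {'last_modified': resp.headers.get('Last-Modified'),
                     'lists': resp.json()}
            with open(CACHE, 'w') as f:
                json.dump(cache, f)
            return cache['lists']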

    Read the article

  • How does Dropbox version/upload large files?

    - by barfoon
    Hey everyone, I have a free Dropbox account (2 GB), and I was wondering how the versioning of large files works. I have a full backup of all my web files that sits at just over 1 GB. After the initial upload of 1 GB, every time it syncs, will Dropbox figure out the delta of the file, or will it have to upload the entire thing again to version it? It would be cool to always have an up-to-date version of a large file, but I don't want to kill my bandwidth uploading 1 GB every time. Is this possible? Thanks,
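
    Sync clients in this family generally avoid re-uploading whole files by hashing them in fixed-size blocks and sending only the blocks that changed. A Python sketch of that idea (the block size and scheme here are illustrative, not Dropbox's actual protocol):

        import hashlib

        BLOCK = 4 * 1024 * 1024  # 4 MiB blocks, an illustrative choice

        def block_hashes(path):
            # One hash per fixed-size block of the file.
            hashes = []
            with open(path, 'rb') as f:
                while True:
                    block = f.read(BLOCK)
                    if not block:
                        break
                    hashes.append(hashlib.sha256(block).hexdigest())
            return hashes

        def changed_blocks(old_hashes, new_hashes):
            # Indices of blocks that must be re-uploaded after an edit.
            return [i for i, h in enumerate(new_hashes)
                    if i >= len(old_hashes) or old_hashes[i] != h]

    Under a scheme like this, an in-place edit to a 1 GB file costs a few blocks of upload rather than the whole gigabyte (an insertion near the start of the file is the worst case, since it shifts every later block).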

    Read the article

  • where to store temporary data in MVC 2.0 project

    - by StuffHappens
    Hello! I'm starting to learn MVC 2.0 and I'm trying to create a site with a quiz: the user is asked a question and given several answer options. If he chooses the right answer he gets some points; if he doesn't, he loses them. I tried to do this the following way:

        public class HomeController : Controller
        {
            private ITaskGenerator taskGenerator = new TaskGenerator();
            private string correctAnswer;

            public ActionResult Index()
            {
                var task = taskGenerator.GenerateTask();
                ViewData["Task"] = task.Task;
                ViewData["Options"] = task.Options;
                correctAnswer = task.CorrectAnswer;
                return View();
            }

            public ActionResult Answer(string id)
            {
                if (id == correctAnswer)
                    return View("Correct");
                return View("Incorrect");
            }
        }

    But I have a problem: when the user answers, the controller class is re-created and I lose the correct answer. So what is the best place to store the correct answer? Should I create a static class for this purpose? Thanks for your help!

    Read the article

  • iPhone - Create non-persistent entities in Core Data

    - by ncohen
    Hi everyone, I would like to use entity objects but not store them... I read that I could create them like this:

        myElement = (Element *)[NSEntityDescription
            insertNewObjectForEntityForName:@"Element"
                 inManagedObjectContext:managedObjectContext];

    and right after that remove them:

        [managedObjectContext deleteObject:myElement];

    Then I can use my elements:

        myElement.property1 = @"Hello";

    This works pretty well, even though I think it's probably not the most optimal way to do it... Then I tried to use it in my UITableView. The problem is that the object gets released after the initialization; my table becomes empty when I move it! Thanks

    Read the article

  • ado.net-data-services filter using composite

    - by Thurein
    Hi, I am having a problem filtering a query. I have Contact and Tag entities. In the database they are actually three tables: Contacts, Tags, and ContactTag. I would like to filter contacts using the tag name. I was trying this filter, but it did not work:

        http://localhost:50143/ContactDataService.svc/Contacts?$filter=Tags/TagName eq 'Tag1'

    Am I missing anything? Thanks, Thurein

    Read the article

  • Best way to copy large amount of data between partitions

    - by skinp
    I'm looking to transfer data across two LVs on an HP-UX server. I have a couple of those transfers to do, some of which are mostly binary (Oracle tablespaces...) and some of which are more text files (logs...). The used data size of the volumes is between 100 GB and 1 TB. Also, I will be changing the block size from 1K to 8K on some of these partitions... Things I'm looking for:

    • Guarantees data integrity
    • Fastest data transfer speed
    • Keeps file ownership and permissions

    Right now, I've thought about dd, cp and rsync, but I'm not sure which one is best to use or the best way to use them...

    Read the article
