Search Results

Search found 59969 results on 2399 pages for 'data dictionary'.

  • Is a python dictionary the best data structure to solve this problem?

    - by mikip
    Hi, I have a number of processes running which are controlled by remote clients. A TCP server controls access to these processes, with only one client per process. The processes are given an id number in the range 0 to n-1, where 'n' is the number of processes. I use a dictionary to map this id to the client socket's file descriptor. On startup I populate the dictionary with the ids as keys and a socket fd of 'None' for the values, i.e. no clients and all processes are available. When a client connects, I map the id to the socket's fd. When a client disconnects I set the value for this id back to None, i.e. the process is available. So every time a client connects I have to scan each entry in the dictionary for a process whose socket fd entry is None. If one is found, the client is allowed to connect. This solution does not seem very elegant; are there other data structures which would be more suitable for solving this? Thanks
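    One way to avoid scanning the whole dictionary on every connect is to keep the id-to-socket map but track the free ids in a separate collection, so finding an available process is O(1). A minimal sketch (the ProcessPool class and its method names are illustrative, not from the original post):

    ```python
    from collections import deque

    class ProcessPool:
        def __init__(self, n):
            # id -> client socket fd (None means no client attached)
            self.clients = dict.fromkeys(range(n))
            # ids of processes that currently have no client, in FIFO order
            self.free_ids = deque(range(n))

        def acquire(self, sock_fd):
            """Attach a client socket to the next free process; return its id, or None if all are busy."""
            if not self.free_ids:
                return None
            pid = self.free_ids.popleft()
            self.clients[pid] = sock_fd
            return pid

        def release(self, pid):
            """Detach the client from process `pid` and mark it available again."""
            self.clients[pid] = None
            self.free_ids.append(pid)
    ```

    The dictionary still answers "which client owns process i", while the deque answers "is any process free" without a scan.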

    Read the article

  • Is A Web App Feasible For A Heavy Use Data Entry System?

    - by Rob
    Looking for opinions on this. We're working on a project that is essentially a data entry system for a production line: heavy data input by users who normally work in Excel or other thick-client data systems. We've been told (as a consequence) that we have to develop this as a thick client using .NET. Our argument was to develop it as a web app, as that resolves a lot of issues and would be easier to write and maintain. Their argument against the web is that (supposedly) the web is not ready yet for a heavy-duty data entry system, and that the web in a browser does not offer the speed, responsiveness, and fluid experience for the end user that a thick client can (citing things such as drag and drop, rapid auto-entry and data navigation, etc.). Personally, I think that with good form design and jQuery/AJAX, a web app could do everything a thick client does just as well, and they just don't know what they're talking about. The irony is that a thick client has to go to a lot more effort to manage deployment and connectivity back to the central data server than a web app would, so in terms of speed I would expect a web app to be faster. What are the thoughts of those out there? Are there any modern, heavy-use data entry systems in production today that were developed as web apps? Appreciate any feedback. Regards, Rob.

    Read the article

  • Pulling Data out of an object in Javascript

    - by PerryCS
    I am having a problem retrieving data out of an object passed back from PHP. I've tried many different ways to access this data and none work. In Firebug I see the following... (it looks nicer in Firebug) - I tried to make this look as close to Firebug as possible results Object { data="{"formName":"form3","formData":"data goes here"}", phpLiveDebug="<...s: 198.91.215.227"} data "{"formName":"form3","formData":"data goes here"}" phpLiveDebug "<...s: 198.91.215.227" I can access phpLiveDebug no problem, but the data portion is an object. I have tried the following... success: function(results) { //$("#formName").val(results.data.formName); //$("#formName").val(results.data[0].formName); //$("#formName").val(results.data[0]); //$("#formName").val(results.data[1]); //$("#formName").val(results.data[0]["formName"]); var tmp = results.data[formName]; alert("!" + tmp + "!"); $("#formName").val(tmp); $("#jqueryPHPDebug").val(results.phpLiveDebug); } This line works in the example above... $("#jqueryPHPDebug").val(results.phpLiveDebug); but... I can't figure out how to get at the data inside the results.data portion... as you can see above, I have been trying different things, and more not even listed there. I was really hoping this line would work :) var tmp = results.data[formName]; But it doesn't. So, after many days of reading and tinkering, my solution was to rewrite it to return data similar to the phpLiveDebug, but then I thought... it's gotta be something simple I'm overlooking... Thank you for your time. If you can, please try to explain why my logic (my horrible attempts at figuring out the proper method) above is wrong.

    Read the article

  • How to convert JavaScript dictionary into Python syntax

    - by Sputnix
    When writing out a JavaScript dictionary from inside a JavaScript-enabled application (such as Adobe) into an external .jsx file (or any other .txt file), the content of the resulting file looks like: ({one:"1", two:"2"}) (Please note that the dictionary keys are written as if they were variable names, which they are not.) The next step is to read this .jsx file with Python. I need to find a way to convert ({one:"1", two:"2"}) into Python dictionary syntax such as: {'one':"1", 'two':"2"} It has already been suggested that instead of using JavaScript's built-in dict.toSource() it would make more sense to use JSON, which would write the dictionary content in a syntax similar to Python's. But unfortunately using JSON is not an option for me. I need to find a way to convert ({one:"1", two:"2"}) into {'one':"1", 'two':"2"} using Python alone. Any suggestions on how to achieve it? Once again, the problem is mostly the dictionary key syntax, which from Python's point of view looks like variable names instead of string dictionary keys: one vs "one"
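    One Python-only approach (a minimal sketch, assuming the keys are bare identifiers and the values are quoted strings or numbers, as in the toSource() output above) is to quote the keys with a regular expression and hand the result to ast.literal_eval:

    ```python
    import ast
    import re

    def js_object_to_dict(text):
        """Convert a toSource()-style literal like ({one:"1", two:"2"})
        into a Python dict. Assumes keys are bare identifiers and values
        are already valid Python literals (quoted strings or numbers)."""
        text = text.strip()
        # Drop the surrounding parentheses that toSource() adds.
        if text.startswith("(") and text.endswith(")"):
            text = text[1:-1]
        # Quote the bare keys: {one:"1"} -> {'one':"1"}
        quoted = re.sub(r"([{,]\s*)([A-Za-z_]\w*)\s*:", r"\1'\2':", text)
        return ast.literal_eval(quoted)

    print(js_object_to_dict('({one:"1", two:"2"})'))  # {'one': '1', 'two': '2'}
    ```

    If the values could themselves be bare identifiers or nested objects, the regular expression would need more care; for the simple ({one:"1", two:"2"}) case shown in the question this is enough.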

    Read the article

  • Accessing items from a dictionary using pickle efficiently in Python

    - by user248237
    I have a large dictionary mapping keys (which are strings) to objects. I pickled this large dictionary, and at certain times I want to pull out only a handful of entries from it. The dictionary usually has thousands of entries in total. When I load the dictionary using pickle, as follows: from cPickle import * # my dictionary from pickle, containing thousands of entries mydict = load(open('mypickle.pickle')) # accessing only a handful of entries here for entry in relevant_entries: # find relevant entry value = mydict[entry] I notice that it can take up to 3-4 seconds to load the entire pickle, which I don't need, since I access only a tiny subset of the dictionary entries later on (shown above). How can I make it so pickle only loads the entries I actually need from the dictionary, to make this faster? Thanks.
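    Pickle always materializes the whole object, so there is no way to load just a few entries from a single pickle file. One common workaround (a sketch, not from the original question) is to convert the dictionary once into a shelve database, which pickles each value separately and unpickles keys only on demand:

    ```python
    import shelve

    # Hypothetical stand-ins for the question's data.
    mydict = dict(('key%d' % i, 'value%d' % i) for i in range(10000))
    relevant_entries = ['key42', 'key4711']

    # One-time conversion: copy every entry of the big dict into a shelf on disk.
    shelf = shelve.open('mydata.shelf')
    for key, value in mydict.items():
        shelf[key] = value
    shelf.close()

    # Later: open the shelf read-only and pull out just the entries you need.
    # Only the requested values are unpickled, not the whole mapping.
    shelf = shelve.open('mydata.shelf', flag='r')
    values = dict((key, shelf[key]) for key in relevant_entries)
    shelf.close()
    print(values)
    ```

    A shelf keeps the familiar dict-like interface (shelf[key], key in shelf), so the lookup code barely changes; the keys must be strings, which matches the question's setup.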

    Read the article

  • Dictionary as parameter, where the Value-Type is irrelevant

    - by aaginor
    Hi folks, I have a function that returns the next higher key of a dictionary's key list compared to a given value. If we have a key list of {1, 4, 10, 24} and a given value of 8, the function would return 10. Obviously the type of the value part of the dictionary doesn't matter for the function; the function code for a Dictionary<int, int> and a Dictionary<int, myClass> would be the same. What does the method signature have to look like if I want to call the function with any Dictionary that has int as the key type, while the value type is irrelevant? I tried: private int GetClosedKey(Dictionary<int, object> list, int theValue); but it reports illegal arguments when I call it with such a Dictionary. I don't want to copy and paste the function for each different value type that my function may be called with. Any idea how to accomplish that? Thanks in advance, Frank

    Read the article

  • POSTing JSON data to WCF REST

    - by Randall Sexton
    I'm trying to send data from a client application using jQuery to a REST WCF service based on the WCF REST starter kit. Here's what I have so far. Service Definition: [WebHelp(Comment = "Save PropertyValues to the database")] [WebInvoke(Method = "POST", UriTemplate = "PropertyValues_Save", BodyStyle = WebMessageBodyStyle.WrappedRequest, RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)] [OperationContract] public bool PropertyValues_Save(Guid assetId, Dictionary<Guid, string> newValues) { ... } Call from the client: $.ajax({ url:SVC_PROPERTYVALUES_SAVE, type: "POST", contentType: "application/json; charset=utf-8", data: jsonData, dataType: "json", error: function(XMLHttpRequest, textStatus, errorThrown) { alert(textStatus + ' ' + errorThrown); }, success: function(data) { if (data) { alert('Values saved'); $("#confirmSubmit").dialog('close'); } else { alert('Values failed to save'); $("#confirmSubmit").dialog('close'); } } }); Example of the JSON being passed: { "assetId": "d70714c3-e403-4cc5-b8a9-9713d05b2ee0", "newValues": [ { "key": "bd01aa88-b48d-47c7-8d3f-eadf47a46680", "value": "0e9fdf34-2d12-4639-8d70-19b88e753ab1" }, { "key": "06e8eda2-a004-450e-90ab-64df357013cf", "value": "1d490aec-f40e-47d5-865c-07fe9624f955" } ] } I'm using Windows Authentication on the virtual directory. When I call operations that are GETs, everything is fine. This code is prompting the browser to log in. When I enter my credentials, I simply get an alert in my browser which says "error undefined". Even if you can't help my specific error, do you see anything that looks wrong from glancing? I've been beating my head on this nearly all day. Thanks in advance.

    Read the article

  • Cannot Add Particular Word to Dictionary

    - by WCWedin
    I am trying to add a particular word to my custom dictionary using Word 2007. (The word happens to be "deserialized".) When I right-click on the word and click Add to Dictionary, the red underline does not go away. When I use the Spelling & Grammar tool from the Review tab on the ribbon, it will stop on that word; clicking the Add to Dictionary button has no effect. Oddly, I am able to add other words to the custom dictionary without a problem. I recently added "deserializes", for instance. I have only encountered this problem with that one particular word. Does anyone know what might be wrong and how I might fix it? Clarifications: My document and all its content is set to English (United States). My custom dictionary is set to apply to All Languages, which is the default value. "Serialize" is in the US English default dictionary, but "deserialize" and its various forms are not.

    Read the article

  • Building dynamic OLAP data marts on-the-fly

    - by DrJohn
    At the forthcoming SQLBits conference, I will be presenting a session on how to dynamically build an OLAP data mart on-the-fly. This blog entry is intended to clarify exactly what I mean by an OLAP data mart, why you may need to build them on-the-fly and finally outline the steps needed to build them dynamically. In subsequent blog entries, I will present exactly how to implement some of the techniques involved.

    What is an OLAP data mart? In data warehousing parlance, a data mart is a subset of the overall corporate data provided to business users to meet specific business needs. Of course, the term does not specify the technology involved, so I coined the term "OLAP data mart" to identify a subset of data which is delivered in the form of an OLAP cube, which may be accompanied by the relational database upon which it was built. To clarify, the relational database is specifically created and loaded with the subset of data, and then the OLAP cube is built and processed to make the data available to the end-users via standard OLAP client tools.

    Why build OLAP data marts? Market research companies sell data to their clients to make money. To gain competitive advantage, market research providers like to "add value" to their data by providing systems that enhance analytics, thereby allowing clients to make best use of the data. As such, OLAP cubes have become a standard way of delivering added value to clients. They can be built on-the-fly to hold specific data sets and meet particular needs and then hosted on a secure intranet site for remote access, or shipped to clients' own infrastructure for hosting. Even better, they support a wide range of different tools for analytical purposes, including the ever-popular Microsoft Excel.

    Extension Attributes: The Challenge One of the key challenges in building multiple OLAP data marts based on the same 'template' is handling extension attributes. These are attributes that meet the client's specific reporting needs, but do not form part of the standard template. Now clearly, these extension attributes have to come into the system via additional files and ultimately be added to relational tables so they can end up in the OLAP cube. However, processing these files and filling dynamically altered tables with SSIS is a challenge, as SSIS packages tend to break as soon as the database schema changes. There are two approaches to this: (1) dynamically build an SSIS package in memory to match the new database schema using C#, or (2) have the extension attributes provided as name/value pairs so the file's schema does not change and can easily be loaded using SSIS. The problem with the first approach is the complexity of writing an awful lot of complex C# code. The problem with the second approach is that name/value pairs are useless to an OLAP cube; so they have to be pivoted back into a proper relational table somewhere in the data load process WITHOUT breaking SSIS. How this can be done will be part of a future blog entry.

    What is involved in building an OLAP data mart? There are a great many steps involved in building OLAP data marts on-the-fly. The key point is that all the steps must be automated to allow for the production of multiple OLAP data marts per day (i.e. many thousands, each with its own specific data set and attributes). Now most of these steps have a great deal in common with standard data warehouse practices. The key difference is that the databases are all built to order.
    The only permanent database is the metadata database (shown in orange), which holds all the metadata needed to build everything else (i.e. client orders, configuration information, connection strings, client-specific requirements and attributes etc.). The staging database (shown in red) has a short life: it is built, populated and then ripped down as soon as the OLAP data mart has been populated. In the diagram below, the OLAP data mart comprises the two blue components: the Data Mart, which is a relational database, and the OLAP Cube, which is an OLAP database implemented using Microsoft Analysis Services (SSAS). The client may receive just the OLAP cube or both components together depending on their reporting requirements. So, in broad terms, the steps required to fulfil a client order are as follows:
    Step 1: Prepare metadata. Create a set of database names unique to the client's order. Modify all package connection strings to be used by SSIS to point to the new databases and file locations.
    Step 2: Create relational databases. Create the staging and data mart relational databases using dynamic SQL and set the database recovery mode to SIMPLE, as we do not need the overhead of logging anything. Execute SQL scripts to build all database objects (tables, views, functions and stored procedures) in the two databases.
    Step 3: Load staging database. Use SSIS to load all data files into the staging database in a parallel operation. Load extension files containing name/value pairs; these will provide client-specific attributes in the OLAP cube.
    Step 4: Load data mart relational database. Load the data from staging into the data mart relational database, again in parallel where possible. Allocate surrogate keys and use SSIS to perform surrogate key lookup during the load of fact tables.
    Step 5: Load extension tables & attributes. Pivot the extension attributes from their native name/value pairs into proper relational tables. Add the extension attributes to the views used by the OLAP cube.
    Step 6: Deploy & process OLAP cube. Deploy the OLAP database directly to the server using a C# script task in SSIS. Modify the connection string used by the OLAP cube to point to the data mart relational database. Modify the cube structure to add the extension attributes to both the data source view and the relevant dimensions. Remove any standard attributes that are not required. Process the OLAP cube.
    Step 7: Backup and drop databases. Drop the staging database as it is no longer required. Backup the data mart relational and OLAP databases and ship these to the client's infrastructure. Drop the data mart relational and OLAP databases from the build server. Mark the order complete. Start processing the next order, ad infinitum.
    So my future blog posts and my forthcoming session at the SQLBits conference will all focus on some of the more interesting aspects of building OLAP data marts on-the-fly, such as handling the load of extension attributes and how to dynamically alter the structure of an OLAP cube using C#.
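    As a language-agnostic illustration of the pivot step above (the blog performs it inside the SQL/SSIS load; the entity ids and attribute names here are made up), turning name/value pairs back into one wide record per entity looks like this:

    ```python
    from collections import defaultdict

    def pivot_name_value_rows(rows):
        """Pivot (entity_id, attribute_name, attribute_value) rows into one
        wide record per entity, so each extension attribute becomes a column."""
        wide = defaultdict(dict)
        for entity_id, name, value in rows:
            wide[entity_id][name] = value
        return wide

    # Example extension-file contents as name/value pairs (made-up attributes).
    rows = [
        ("C001", "Region", "EMEA"),
        ("C001", "Segment", "Premium"),
        ("C002", "Region", "APAC"),
    ]
    for entity_id, attributes in pivot_name_value_rows(rows).items():
        print(entity_id, attributes)
    ```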

    Read the article

  • Python: Data Object or class

    - by arg20
    I enjoy all the Python libraries for scraping websites, and I am experimenting with BeautifulSoup and IMDb just for fun. As I come from Java, I have some Java practices incorporated into my programming style. I am trying to get the info for a certain movie; I can either create a Movie class or just use a dictionary with keys for the attributes. My question is: should I just use dictionaries when a class will only contain data and perhaps almost no behaviour? In other languages creating a type helps you enforce certain restrictions, and because of type checks the IDE will help you program; this is not always the case in Python, so what should I do? Should I resort to creating a class only when there's both behaviour and data, or create a Movie class even though it'll probably be just a data container? This all depends on your model; in this particular case either one is fine, but I'm wondering what good practice is.
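    For a pure data container there is also a middle ground between a bare dict and a hand-written class: collections.namedtuple gives named, immutable fields with almost no code. A small sketch (the Movie fields are illustrative):

    ```python
    from collections import namedtuple

    # A lightweight, immutable record type: behaves like a tuple but with named fields.
    Movie = namedtuple('Movie', ['title', 'year', 'rating'])

    m = Movie(title='Blade Runner', year=1982, rating=8.1)
    print(m.title, m.year)   # attribute access, as with a class
    print(m._asdict())       # still easy to convert back to a dict
    ```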

    Read the article

  • The Best Data Integration for Exadata Comes from Oracle

    - by maria costanzo
    Oracle Data Integrator and Oracle GoldenGate offer unique and optimized data integration solutions for Oracle Exadata. For example, customers that choose to feed their data warehouse or reporting database with near real-time data throughout the day can do so without decreasing performance or availability of source and target systems. And if you ask why real-time, the short answer is: in today's fast-paced, always-on world, business decisions need to use more relevant, timely data to be able to act fast and seize opportunities. A longer response to the "why real-time" question can be found in a related blog post. If we look at the solution architecture, as shown on the diagram below, Oracle Data Integrator and Oracle GoldenGate are both uniquely designed to take full advantage of the power of the database and to eliminate unnecessary middle-tier components. Oracle Data Integrator (ODI) is the best bulk data loading solution for Exadata. ODI is the only ETL platform that can leverage the full power of Exadata, integrate directly on the Exadata machine without any additional hardware, and by far provides the simplest setup and fastest overall performance on an Exadata system. We regularly see customers achieving a 5-10 times boost when they move their ETL to ODI on Exadata. For some companies the performance gain is even much higher. For example, a large insurance company did a proof of concept comparing ODI vs a traditional ETL tool (one of the market leaders) on Exadata. The same process that was taking 5 hours and 11 minutes to complete using the competing ETL product took 7 minutes and 20 seconds with ODI. Oracle Data Integrator was 42 times faster than the conventional ETL when running on Exadata. This shows that Oracle's own data integration offering helps you to gain the most out of your Exadata investment with a truly optimized solution. GoldenGate is the best solution for streaming data from heterogeneous sources into Exadata in real time. Oracle GoldenGate can also be used together with Data Integrator for hybrid use cases that also demand non-invasive capture and high-speed real-time replication. Oracle GoldenGate enables real-time data feeds from heterogeneous sources non-invasively, and delivers to the staging area on the target Exadata system. ODI runs directly on Exadata to use the database engine power to perform in-database transformations. Enterprise Data Quality is integrated with Oracle Data Integrator and enables ODI to load trusted data into the data warehouse tables. Only Oracle can offer all these technical benefits wrapped into a single intelligence data warehouse solution that runs on Exadata. Compared to traditional ETL with add-on CDC, this solution offers: non-invasive data capture from heterogeneous sources that avoids any performance impact on the source; no mid-tier, with set-based transformations using database power; mini-batches throughout the day or bulk processing nightly, which means maximum availability for the DW; and an integrated solution with Enterprise Data Quality that enables leveraging trusted data in the data warehouse. In addition to Starwood Hotels and Resorts, Morrison Supermarkets, United Kingdom's fourth-largest food retailer, has seen the power of this solution for their new BI platform and shared their story with us. Morrisons needed to analyze data across a large number of manufacturing, warehousing, retail, and financial applications with the goal of achieving a single view into operations for improved customer service.
The retailer deployed Oracle GoldenGate and Oracle Data Integrator to bring new data into Oracle Exadata in near real-time and replicate the data into reporting structures within the data warehouse—extending visibility into operations. Using Oracle's data integration offering for Exadata, Morrisons produced financial reports in seconds, rather than minutes, and improved staff productivity and agility. You can read more about Morrison’s success story here and hear from Starwood here. From an Irem Radzik article.

    Read the article

  • How to remove all that country-specific dictionaries (like En_AU, En_CA, de_CH, etc)?

    - by Ivan
    After I installed some language packs and spell-checking dictionaries (which I'd like to use with Firefox and OpenOffice), I've got tons of language variations installed. This makes it very inconvenient to maintain dictionary additions, for example. Sometimes Firefox decides to switch to the Australian dictionary, sometimes to the UK one, sometimes to the US one, etc. For me, a Russian, English is just English, and German is just German. I think every English speaker will understand me whether I write "color" or "colour", "dialog" or "dialogue" (I usually prefer classic UK spelling, as a matter of habit, as I was taught at school). How do I remove all those dialects?

    Read the article

  • Data access pattern

    - by andlju
    I need some advice on what kind of pattern(s) I should use for pushing/pulling data into my application. I'm writing a rule-engine that needs to hold quite a large amount of data in memory in order to be efficient enough. I have some rather conflicting requirements: It is not acceptable for the engine to always have to wait for a full pre-load of all data before it is functional. Only fetching and caching data on-demand will lead to the engine taking too long before it is running quickly enough. An external event can trigger the need for specific parts of the data to be reloaded. Basically, I think I need a combination of pushing and pulling data into the application. A simplified version of my current "pattern" looks like this (in pseudo-C# written in notepad): // This interface is implemented by all classes that need the data interface IDataSubscriber { void RegisterData(Entity data); } // This interface is implemented by the data access class interface IDataProvider { void EnsureLoaded(Key dataKey); void RegisterSubscriber(IDataSubscriber subscriber); } class MyClassThatNeedsData : IDataSubscriber { IDataProvider _provider; MyClassThatNeedsData(IDataProvider provider) { _provider = provider; _provider.RegisterSubscriber(this); } public void RegisterData(Entity data) { // Save data for later StoreDataInCache(data); } void UseData(Key key) { // Make sure that the data has been stored in cache _provider.EnsureLoaded(key); Entity data = GetDataFromCache(key); } } class MyDataProvider : IDataProvider { List<IDataSubscriber> _subscribers; // Make sure that the data for key has been loaded to all subscribers public void EnsureLoaded(Key key) { if (HasKeyBeenMarkedAsLoaded(key)) return; PublishDataToSubscribers(key); MarkKeyAsLoaded(key); } // Force all subscribers to get a new version of the data for key public void ForceReload(Key key) { PublishDataToSubscribers(key); MarkKeyAsLoaded(key); } void PublishDataToSubscribers(Key key) { Entity data = FetchDataFromStore(key); foreach(var subscriber in _subscribers) { subscriber.RegisterData(data); } } } // This class will be spun off on startup and should make sure that all data is // preloaded as quickly as possible class MyPreloadingThread { IDataProvider _provider; MyPreloadingThread(IDataProvider provider) { _provider = provider; } void RunInBackground() { IEnumerable<Key> allKeys = GetAllKeys(); foreach(var key in allKeys) { _provider.EnsureLoaded(key); } } } I have a feeling though that this is not necessarily the best way of doing this... Just the fact that explaining it seems to take two pages feels like an indication... Any ideas? Any patterns out there I should have a look at?
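    One way to read this design is as a pull-through cache with an observer-style push for reloads, plus a background preloader. A compact sketch of the same idea in Python (the names and the threading approach are illustrative, not from the original post):

    ```python
    import threading

    class DataProvider:
        """Loads entities on demand, pushes them to subscribers, and lets a
        background thread warm the cache so consumers rarely have to wait."""

        def __init__(self, fetch, all_keys):
            self._fetch = fetch            # function: key -> entity (e.g. a database read)
            self._all_keys = list(all_keys)
            self._subscribers = []
            self._loaded = set()
            self._lock = threading.Lock()

        def register_subscriber(self, subscriber):
            self._subscribers.append(subscriber)

        def ensure_loaded(self, key):
            """Pull: load `key` only if nobody has seen it yet."""
            with self._lock:
                if key in self._loaded:
                    return
                self._publish(key)
                self._loaded.add(key)

        def force_reload(self, key):
            """Push: an external event invalidated `key`, so republish it."""
            with self._lock:
                self._publish(key)
                self._loaded.add(key)

        def _publish(self, key):
            entity = self._fetch(key)
            for subscriber in self._subscribers:
                subscriber.register_data(key, entity)

        def preload_in_background(self):
            """Warm the cache without blocking the engine's startup."""
            def run():
                for key in self._all_keys:
                    self.ensure_loaded(key)
            worker = threading.Thread(target=run)
            worker.daemon = True
            worker.start()
    ```

    A subscriber here is anything with a register_data(key, entity) method; the engine calls ensure_loaded right before using a key, so a cache miss only blocks for that single entity while the background thread keeps warming the rest.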

    Read the article

  • python: what are efficient techniques to deal with deeply nested data in a flexible manner?

    - by AlexandreS
    My question is not about a specific code snippet but more general, so please bear with me: How should I organize the data I'm analyzing, and which tools should I use to manage it? I'm using Python and numpy to analyse data. Because the Python documentation indicates that dictionaries are very optimized in Python, and also due to the fact that the data itself is very structured, I stored it in a deeply nested dictionary. Here is a skeleton of the dictionary: the position in the hierarchy defines the nature of the element, and each new line defines the contents of a key in the precedent level: [AS091209M02] [AS091209M01] [AS090901M06] ... [100113] [100211] [100128] [100121] [R16] [R17] [R03] [R15] [R05] [R04] [R07] ... [1263399103] ... [ImageSize] [FilePath] [Trials] [Depth] [Frames] [Responses] ... [N01] [N04] ... [Sequential] [Randomized] [Ch1] [Ch2] Edit: To explain my data set a bit better: [individual] ex: [AS091209M02] [imaging session (date string)] ex: [100113] [Region imaged] ex: [R16] [timestamp of file] ex [1263399103] [properties of file] ex: [Responses] [regions of interest in image ] ex [N01] [format of data] ex [Sequential] [channel of acquisition: this key indexes an array of values] ex [Ch1] The type of operations I perform is, for instance, to compute properties of the arrays (listed under Ch1, Ch2), pick up arrays to make a new collection, for instance analyze responses of N01 from region 16 (R16) of a given individual at different time points, etc. This structure works well for me and is very fast, as promised. I can analyze the full data set pretty quickly (and the dictionary is far too small to fill up my computer's RAM: half a gig). My problem comes from the cumbersome manner in which I need to program the operations of the dictionary. I often have stretches of code that go like this: for mk in dic.keys(): for rgk in dic[mk].keys(): for nk in dic[mk][rgk].keys(): for ik in dic[mk][rgk][nk].keys(): for ek in dic[mk][rgk][nk][ik].keys(): #do something which is ugly, cumbersome, non-reusable, and brittle (I need to recode it for any variant of the dictionary). I tried using recursive functions, but apart from the simplest applications, I ran into some very nasty bugs and bizarre behaviors that caused a big waste of time (it does not help that I don't manage to debug with pdb in ipython when I'm dealing with deeply nested recursive functions). In the end the only recursive function I use regularly is the following: def dicExplorer(dic, depth = -1, stp = 0): '''Prints the hierarchy of a dictionary. If depth is not specified, explores the whole dictionary.''' if depth - stp == 0: return try: list_keys = dic.keys() except AttributeError: return stp += 1 for key in list_keys: print '+%s> [\'%s\']' %(stp * '---', key) dicExplorer(dic[key], depth, stp) I know I'm doing this wrong, because my code is long, noodly and non-reusable. I need to either use better techniques to flexibly manipulate the dictionaries, or to put the data in some database format (sqlite?). My problem is that since I'm (badly) self-taught with regard to programming, I lack the practical experience and background knowledge to appreciate the options available. I'm ready to learn new tools (SQL, object-oriented programming), whatever it takes to get the job done, but I am reluctant to invest my time and efforts into something that will be a dead end for my needs. So what are your suggestions to tackle this issue, and be able to code my tools in a briefer, more flexible and reusable manner?
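    One technique that removes the nested for loops is a small generator that walks the nested dictionary and yields each leaf value together with the path of keys leading to it (a sketch; how deep you descend and how you filter on the path is up to your own hierarchy):

    ```python
    def iter_leaves(node, path=()):
        """Yield (path, value) pairs for every leaf of a nested dictionary.
        A 'leaf' is any value that is not itself a dict."""
        if isinstance(node, dict):
            for key, child in node.items():
                for item in iter_leaves(child, path + (key,)):
                    yield item
        else:
            yield path, node

    # Example: visit every channel array, whatever the nesting depth,
    # and filter on any level of the key path.
    nested = {'AS091209M02': {'100113': {'R16': {'N01': {'Ch1': [1, 2, 3]}}}}}
    for path, value in iter_leaves(nested):
        individual, session = path[0], path[1]
        if session == '100113':
            print(path, value)
    ```

    Filtering then becomes a condition on the path tuple instead of another level of nesting, and the same generator works for any variant of the dictionary.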

    Read the article

  • Accessing a dictionary value by custom object value in Python?

    - by Sam
    So I have a square that's made up of a series of points. At every point there is a corresponding value. What I want to do is build a dictionary like this: class Point: def __init__(self, x, y): self._x = x self._y = y square = {} for x in range(0, 5): for y in range(0, 5): point = Point(x,y) square[point] = None However, if I later create a new point object and try to access the value of the dictionary with the key of that point it doesn't work.. square[Point(2,2)] Traceback (most recent call last): File "<pyshell#19>", line 1, in <module> square[Point(2,2)] KeyError: <__main__.Point instance at 0x02E6C378> I'm guessing that this is because python doesn't consider two objects with the same properties to be the same object? Is there any way around this? Thanks
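    By default a class instance hashes and compares by identity, so two Point(2, 2) objects are different dictionary keys. Defining __eq__ and __hash__ makes points with equal coordinates interchangeable as keys; a minimal sketch:

    ```python
    class Point:
        def __init__(self, x, y):
            self._x = x
            self._y = y

        def __eq__(self, other):
            # Two points are equal if their coordinates match.
            return isinstance(other, Point) and (self._x, self._y) == (other._x, other._y)

        def __hash__(self):
            # Equal points must hash to the same value to work as dict keys.
            return hash((self._x, self._y))

    square = {Point(x, y): None for x in range(5) for y in range(5)}
    print(Point(2, 2) in square)   # True: lookup succeeds with a brand-new object
    ```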

    Read the article

  • Import csv data (SDK iphone)

    - by Ni
    I am new to Cocoa. I have been working on this stuff for a few days. With the following code, I can read all the data in the string and successfully get the data for the plot. NSMutableArray *contentArray = [NSMutableArray array]; NSString *filePath = @"995,995,995,995,995,995,995,995,1000,997,995,994,992,993,992,989,988,987,990,993,989"; NSArray *myText = [filePath componentsSeparatedByString:@","]; NSInteger idx; for (idx = 0; idx < myText.count; idx++) { NSString *data =[myText objectAtIndex:idx]; NSLog(@"%@", data); id x = [NSNumber numberWithFloat:0+idx*0.002777778]; id y = [NSDecimalNumber decimalNumberWithString:data]; [contentArray addObject: [NSMutableDictionary dictionaryWithObjectsAndKeys:x, @"x", y, @"y", nil]]; } self.dataForPlot = contentArray; Then I try to load the data from a CSV file. The data in the Data.csv file has the same values and the same format as 995,995,995,995,995,995,995,995,1000,997,995,994,992,993,992,989,988,987,990,993,989. When I run the code, it is supposed to give the same graph output. However, it seems that the data is not loaded from the CSV file successfully. I cannot figure out what's wrong with my code. NSMutableArray *contentArray = [NSMutableArray array]; NSString *filePath = [[NSBundle mainBundle] pathForResource:@"Data" ofType:@"csv"]; NSString *Data = [NSString stringWithContentsOfFile:filePath encoding:NSUTF8StringEncoding error:nil ]; if (Data) { NSArray *myText = [Data componentsSeparatedByString:@","]; NSInteger idx; for (idx = 0; idx < myText.count; idx++) { NSString *data =[myText objectAtIndex:idx]; NSLog(@"%@", data); id x = [NSNumber numberWithFloat:0+idx*0.002777778]; id y = [NSDecimalNumber decimalNumberWithString:data]; [contentArray addObject: [NSMutableDictionary dictionaryWithObjectsAndKeys:x, @"x", y, @"y",nil]]; } self.dataForPlot = contentArray; } The only difference is NSString *filePath = [[NSBundle mainBundle] pathForResource:@"Data" ofType:@"csv"]; NSString *Data = [NSString stringWithContentsOfFile:filePath encoding:NSUTF8StringEncoding error:nil ]; if (data){ } Did I do anything wrong here? Thanks for your help!!!!

    Read the article

  • POST data not being received

    - by Alexander
    I've got an iPhone App that is supposed to send POST data to my server to register the device in a MySQL database so we can send notifications etc. to it. It sends its unique identifier, device name, token, and a few other small things like passwords and usernames as a POST request to our server. The problem is that sometimes the server doesn't receive the data. And by this I mean it's not just receiving blank values for the POST inputs; it's not receiving ANY POST data at all. I am logging all POST inputs to my server into some log files, and when the script that relies on the POST data from the device fails (detects no data) I notice that it's because NO POST data was sent. Is this a problem on the server, like refusing data or something, or does this have to be on the client's side? What could be causing this?

    Read the article

  • Oracle Big Data Learning Library - Click on LEARN BY PRODUCT to Open Page

    - by chberger
    Oracle Big Data Learning Library... Learn about Oracle Big Data, Data Science, Learning Analytics, Oracle NoSQL Database, and more! Oracle Big Data Essentials Attend this Oracle University Course! Using Oracle NoSQL Database Attend this Oracle University class! Oracle and Big Data on OTN See the latest resource on OTN. Oracle Big Data Appliance Oracle Big Data and Data Science Basics Meeting the Challenge of Big Data Oracle Big Data Tutorial Video Series Oracle MoviePlex - a Big Data End-to-End Series of Demonstrations Oracle Big Data Overview Oracle Big Data Essentials Data Mining Oracle NoSQL Database Tutorial Videos Oracle NoSQL Database Tutorial Series Oracle NoSQL Database Release 2 New Features Using Oracle NoSQL Database Exalytics Enterprise Manager 12c R3: Manage Exalytics Setting Up and Running Summary Advisor on an E s Oracle R Enterprise Oracle R Enterprise Tutorial Series Oracle Big Data Connectors Integrate All Your Data with Oracle Big Data Connectors Using Oracle Direct Connector for HDFS to Read the Data from HDSF Using Oracle R Connector for Hadoop to Analyze Data Oracle NoSQL Database Oracle NoSQL Database Tutorial Videos Oracle NoSQL Database Tutorial Series Oracle NoSQL Database Release 2 New Features Using Oracle NoSQL Database Oracle Business Intelligence Enterprise Edition Oracle Business Intelligence Oracle BI 11g R1: Create Analyses and Dashboards - 4 day class Oracle BI Publisher 11g R1: Fundamentals - 3 day class Oracle BI 11g R1: Build Repositories - 5 day class

    Read the article

  • Let's introduce the Oracle Enterprise Data Quality family!

    - by Sarah Zanchetti
    The Oracle Enterprise Data Quality family of products helps you to achieve maximum value from your business applications by delivering fit-for-purpose data. OEDQ is a state-of-the-art collaborative data quality profiling, analysis, parsing, standardization, matching and merging product, designed to help you understand, improve, protect and govern the quality of the information your business uses, all from a single integrated environment. The Oracle Enterprise Data Quality products are: Oracle Enterprise Data Quality Profile and Audit; Oracle Enterprise Data Quality Parsing and Standardization; Oracle Enterprise Data Quality Match and Merge; Oracle Enterprise Data Quality Address Verification Server; Oracle Enterprise Data Quality Product Data Parsing and Standardization; and Oracle Enterprise Data Quality Product Data Match and Merge. Also, the following are some of the key features of OEDQ: integrated data profiling, auditing, cleansing and matching; browser-based client access; the ability to handle all types of data (for example customer, product, asset, financial, operational); connection to any JDBC-compliant data sources and targets; multi-user project support (role-based access, issue tracking, process annotation, and version control); Service-Oriented Architecture (SOA) support for designing processes that may be exposed to external applications as a service; a design that can process large data volumes; a single repository to hold data along with gathered statistics and project tracking information, with shared access; an intuitive graphical user interface designed to help you solve real-world information quality issues quickly; easy, data-led creation and extension of validation and transformation rules; and a fully extensible architecture allowing the insertion of any required custom processing. If you need to learn more about EDQ, or get assistance for any kind of issue, the Oracle Technology Network offers a huge range of resources on Oracle software. Discuss technical problems and solutions on the Discussion Forums. Get hands-on step-by-step tutorials with Oracle By Example. Download Sample Code. Get the latest news and information on any Oracle product. You can also get further help and information with Oracle software from: My Oracle Support and Oracle Support Services. An Information Center is available, where you can find technical information and fast solutions to the most common already solved issues: Information Center: Oracle Enterprise Data Quality [ID 1555073.2]

    Read the article

  • Why is a .net generic dictionary so big

    - by thefroatgt
    I am serializing a generic dictionary in VB.net and I am very surprised that it is about 1.3kb with a single item. Am I doing something wrong, or is there something else I should be doing? I have a large number of dictionaries and it is killing me to send them all across the wire. The code I use for serialization is Dim dictionary As New Dictionary(Of Integer, Integer) Dim stream As New MemoryStream Dim bformatter As New BinaryFormatter() dictionary.Add(1, 1) bformatter.Serialize(stream, dictionary) Dim len As Long = stream.Length

    Read the article

  • Oracle Enterprise Data Quality - Geared Up and Ready for OpenWorld 2012

    - by Mala Narasimharajan
    10 days and counting till Oracle OpenWorld 2012 is upon us.  Enterprise data quality is key to every information integration and consolidation initiative. At this year's OpenWorld, hear how Oracle Enterprise Data Quality provides the critical piece to achieving trusted, reliable master data and increases the value of data integration initiatives. Here are the different ways you can learn and experience Enterprise Data Quality at OpenWorld:  Conference sessions: Oracle Enterprise Data Quality: Product Overview and Roadmap - Monday 10/1/12, 1:45-2:45 PM - Moscone West - 3006 Data Preparation and Ongoing Governance with the Oracle Enterprise Data Quality Platform - Wednesday 10/3/2012, 1:15-2:15 PM - Moscone West - 3000  Data Acquisition, Migration and Integration with the Oracle Enterprise Data Quality Platform - Thursday 10/4/2012, 12:45-1:45 PM - Moscone West - 3005  Hands on Labs: Introduction to Oracle Enterprise Data Quality Platform -  Monday 10/2/2012, 4:45-5:45 PM - Marriot Marquis - Salon 1/2 Demos:  Trusted Data with Oracle Enterprise Data Quality - Moscone South, Right - S-243 (note: proceed to Middleware Demo grounds) For a list of Master Data Management and Data Quality sessions and other events click here. 

    Read the article

  • Software for a online collaborative bi/tri lingual dictionary [closed]

    - by user537488
    I am looking for software which I can host on popular, general shared web hosting services (online software like WordPress, MediaWiki, Drupal etc.) and which can do the following: allow users to create an account; allow users or anonymous visitors to add words to the dictionary (there will be English as the base language plus other languages); provide an easy way to import all the words from an English dictionary; let users write the other-language equivalent of each English word; give every word its own address and page, e.g. www.namesomething.com/word/en/software will contain the word "software" and the other-language word for it; search should be fast and should find near matches; it should be able to list related words, so if the user is looking at "software" then other words starting with "s" like "softcopy" etc. should appear alphabetically on that page; anyone should be able to comment on a word, not on the main page but on a separate page similar to a wiki talk page; anyone should be able to contribute; and it should have a clean interface, unlike a wiki (MediaWiki and the others), just for words only. I tried MediaWiki and other wiki software, but it felt overloaded and unclean. I am looking for an interface similar to oed.com but clean and minimal, as we are not going to have that much information: just words in English and their other-language equivalents. Here we are talking about a language which has not yet been on the Internet. It should be collaborative.

    Read the article
