Search Results

Search found 21702 results on 869 pages for 'large objects'.

  • Why is my Map broken?

    - by Kirk
    Scenario: I'm creating a server that has Room objects, which contain User objects. I want to store the rooms in a Map of some sort, keyed by id (a string). Desired behavior: when a user makes a request via the server, I should be able to look up the Room by id from the library and then add the user to the room, if that's what the request needs. Currently I use static functions in my Library.java class, where the Map is stored, to retrieve Rooms:

        public class Library {
            private static Hashtable<String, Room> myRooms = new Hashtable<String, Room>();

            public static void addRoom(String s, Room r) {
                myRooms.put(s, r);
            }

            public static Room getRoomById(String s) {
                return myRooms.get(s);
            }
        }

    In another class I'll do the equivalent of myRoom.addUser(user); What I'm observing with Hashtable is that no matter how many times I add a user to the Room returned by getRoomById, the user is not in the room later. I thought that in Java the returned object was essentially a reference to the data, the same object that was in the Hashtable; but it isn't behaving like that. Is there a way to get this behavior? Maybe with a wrapper of some sort? Am I just using the wrong variant of map? Help?
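
    A minimal, self-contained sketch (Room here is a stand-in, not the poster's class) confirming the premise of the question: Map.get() returns a reference to the stored object, so mutations made through it are visible on the next lookup. If the user still seems to vanish, the usual culprits are a second Room being registered under the same id, or a key mismatch, rather than copy-on-get semantics.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class MapReferenceDemo {
            // Stand-in for the poster's Room class.
            static class Room {
                final List<String> users = new ArrayList<String>();
                void addUser(String name) { users.add(name); }
            }

            public static void main(String[] args) {
                Map<String, Room> rooms = new HashMap<String, Room>();
                rooms.put("lobby", new Room());

                // get() hands back a reference to the stored Room, not a copy...
                Room room = rooms.get("lobby");
                room.addUser("kirk");

                // ...so the change is visible through the map afterwards.
                System.out.println(rooms.get("lobby").users); // prints [kirk]
            }
        }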

  • Memory Bandwidth Performance for Modern Machines

    - by porgarmingduod
    I'm designing a real-time system that occasionally has to duplicate a large amount of memory. The memory consists of non-tiny regions, so I expect the copying performance will be fairly close to the maximum bandwidth the relevant components (CPU, RAM, motherboard) can manage. This led me to wonder what kind of raw memory bandwidth modern commodity machines can muster. My aging Core2Duo gives me 1.5 GB/s if I use one thread to memcpy() (and understandably less if I memcpy() with both cores simultaneously). While 1.5 GB/s is a fair amount of data, the real-time application I'm working on will have only something like 1/50th of a second per cycle, which means 30 MB. Basically, almost nothing. And perhaps worst of all, as I add cores, I can process a lot more data without any increased performance for the needed duplication step. But a low-end Core2Duo isn't exactly hot stuff these days. Are there any sites with information, such as actual benchmarks, on raw memory bandwidth on current and near-future hardware? Furthermore, for duplicating large amounts of data in memory, are there any shortcuts, or is memcpy() as good as it will get? Given a bunch of cores with nothing to do but duplicate as much memory as possible in a short amount of time, what's the best I can do?
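
    Not part of the question, but for anyone wanting a number for their own machine: a rough single-threaded copy-bandwidth probe, sketched in Java to keep one language across the examples on this page (System.arraycopy() playing the role of the poster's memcpy()). Run it with a big enough heap, e.g. -Xmx2g, and treat the result as a ballpark figure, not a benchmark.

        public class CopyBandwidthProbe {
            public static void main(String[] args) {
                final int size = 256 * 1024 * 1024; // 256 MB per buffer
                byte[] src = new byte[size];
                byte[] dst = new byte[size];

                // Warm up so the timed runs see resident pages and JIT-compiled code.
                for (int i = 0; i < 3; i++) System.arraycopy(src, 0, dst, 0, size);

                final int runs = 10;
                long start = System.nanoTime();
                for (int i = 0; i < runs; i++) System.arraycopy(src, 0, dst, 0, size);
                double seconds = (System.nanoTime() - start) / 1e9;

                double gbPerSec = (double) size * runs / seconds / 1e9;
                System.out.printf("~%.2f GB/s (single thread)%n", gbPerSec);
            }
        }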

  • How to use boost::bind with non-copyable params, for example boost::promise?

    - by zhengxi
    Some C++ objects have no copy constructor, but do have a move constructor. For example, boost::promise. How can I bind those objects using their move constructors?

        #include <boost/thread.hpp>

        void fullfil_1(boost::promise<int>& prom, int x) {
            prom.set_value(x);
        }

        boost::function<void()> get_functor() {
            // boost::promise is not copyable, but movable
            boost::promise<int> pi;

            // compilation error
            boost::function<void()> f_set_one = boost::bind(&fullfil_1, pi, 1);

            // compilation error as well
            boost::function<void()> f_set_one = boost::bind(&fullfil_1, std::move(pi), 1);

            // PS. I know it is possible to bind a pointer to the object instead of
            // the object itself. But that is a weird solution: in this case I will
            // have to take care of the lifetime of the object myself instead of
            // delegating that to boost::bind (by moving the object into the
            // boost::function object).
            //
            // weird: pi will be destroyed on leaving the scope
            boost::function<void()> f_set_one = boost::bind(&fullfil_1, boost::ref(pi), 1);

            return f_set_one;
        }

  • What is the correct way to handle an object which is an instance of a class in apache.xerces?

    - by Roman
    Preface: I'm working on a docx parser for Java. The docx format is based on XML. When I read a document, its parts are unmarshalled (with JAXB) and I get a tree of elements based on the XML markup. Almost the problem: some elements (which sit at a very deep XML level) are returned not as the corresponding class from the docx spec (i.e. CTStyle, CTDrawing, CTInline etc.) but as Object. Those objects are in fact instances of Xerces classes, e.g. ElementNSImpl. Problem: how should I handle these objects? The simplest approach is:

        CTGraphicData gData = getGraphicData();
        Object obj = gData.getAny().get(0);
        ElementNSImpl element = (ElementNSImpl) obj;

    But that doesn't seem like a good solution. I've never worked with Xerces directly. What is the better way to do this casting? (If anyone can also give me a tip about the right way to iterate through the nodes, that would be great.)
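
    One common way to handle this, sketched under the assumption that getAny() is returning DOM nodes: cast to the org.w3c.dom.Element interface, which ElementNSImpl implements, rather than to the Xerces implementation class, and walk the children through the same standard interfaces. The helper below is illustrative, not part of the question.

        import org.w3c.dom.Element;
        import org.w3c.dom.Node;
        import org.w3c.dom.NodeList;

        public class DomHelper {
            // Prints the local names of the child elements of an unmarshalled node.
            // Casting to org.w3c.dom.Element keeps the code independent of the
            // Xerces implementation class ElementNSImpl, which is an internal detail.
            static void printChildElements(Object unmarshalled) {
                if (!(unmarshalled instanceof Element)) return;
                Element element = (Element) unmarshalled;
                NodeList children = element.getChildNodes();
                for (int i = 0; i < children.getLength(); i++) {
                    Node child = children.item(i);
                    if (child.getNodeType() == Node.ELEMENT_NODE) {
                        System.out.println(child.getLocalName());
                    }
                }
            }
        }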

  • Problem with flash in a webbrowser in a winform

    - by fgnt
    I have the oddest problem (but aren't all programming problems odd?). I have a winform that contains a webbrowser object that opens a website with flash on it. This winform is running on a touchscreen computer (I can't find the brand or model number). Here is what I know:

    - Flash objects embedded in a website that is accessed via the webbrowser object in my winform do not function properly.
    - Said flash objects only react to the first 'click' on them. So the website opens and if I hit a button, that button works, but nothing afterwards works within the flash object. If my first 'click' misses a button, nothing works thereafter.
    - Trying to 'click' a flash button gives the same response as just hovering over the button.
    - This isn't a problem with the touch part of the touch screen, as using a mouse gives the same broken response.
    - This isn't a problem with the web page, as I can open Explorer on the same computer and navigate the webpage just fine from there.
    - The program also works 100% correctly on my personal computer, so it shouldn't be the program's fault.
    - If it's not the touch screen's fault and not the program's fault, I can't blame anything right now.
    - The EXACT same program worked 100% on our old touch screen (which was having other problems, so we had to get rid of it).
    - Oh, also, surfing a 'normal' webpage in the webbrowser in the winform works just fine.

  • iPhone OS: Strategies for high density image work

    - by Jasconius
    I have a project coming around the bend this summer that is going to involve a potentially extremely high volume of image data for display. We are talking hundreds of 640x480-ish images in a given application session (scaled to a smaller resolution when displayed), and handfuls of very large (1280x1024 or higher) images at a time. I've already done some preliminary work and found that the typical 640x480-ish image is just a shade under 1 MB in memory when placed into a UIImageView and displayed, but the very large images can be a whopping 5+ MB in some cases. This project is actually targeted at the iPad, which, in my Instruments tests, seems to cap out at about 80-100 MB of addressable physical memory. Details aside, I need to start thinking about how to move huge volumes of image data between virtual and physical memory while preserving the fluidity and responsiveness of the application, which will be high visibility. I'm probably on the higher end of intermediate at Objective-C, so I am looking for some solid articles and advice on the following:

    1) Responsible management of UIImage and UIImageView in the name of conserving physical RAM
    2) Merits of using CGImage over UIImage, particularly for the huge images, and whether there will be any performance gain
    3) Anything dealing with memory paging, particularly as it pertains to images

    I will epilogue by saying that the numbers above may be off by about 10 or 15%. Images may or may not end up being bundled into the actual app itself, as opposed to being loaded in from an external server.

  • OpenGL iPhone SDK: How to tell if you're touching an object on screen?

    - by TheGambler
    First is my touchesBegan function, and then the struct that stores the values for my object. I have an array of these objects, and when I touch the screen I'm trying to figure out whether I'm touching one of them. I don't know if I need to iterate through all my objects to work out which one I'm touching, or if there is an easier, more efficient way. How is this usually handled?

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            [super touchesBegan:touches withEvent:event];
            UITouch *touch = ([touches count] == 1 ? [touches anyObject] : nil);
            CGRect bounds = [self bounds];
            CGPoint location = [touch locationInView:self];
            location.y = bounds.size.height - location.y;
            float xTouched = location.x/20 - 8 + ((int)location.x % 20)/20;
            float yTouched = location.y/20 - 12 + ((int)location.y % 20)/20;
        }

        typedef struct object_tag   // Create A Structure Called Object
        {
            int tex;                // Integer Used To Select Our Texture
            float x;                // X Position
            float y;                // Y Position
            float z;                // Z Position
            float yi;               // Y Increase Speed (Fall Speed)
            float spinz;            // Z Axis Spin
            float spinzi;           // Z Axis Spin Speed
            float flap;             // Flapping Triangles :)
            float fi;               // Flap Direction (Increase Value)
        } object;

  • Create dynamic factory method in PHP (< 5.3)

    - by fireeyedboy
    How would one typically create a dynamic factory method in PHP? By dynamic factory method, I mean a factory method that will autodiscover what objects there are to create, based on some aspect of the given argument, preferably without registering them with the factory first. I'm OK with having the possible objects placed in one common place (a directory), though. I want to avoid your typical switch statement in the factory method, such as this:

        public static function factory( $someObject ) {
            $className = get_class( $someObject );
            switch( $className ) {
                case 'Foo':
                    return new FooRelatedObject();
                    break;
                case 'Bar':
                    return new BarRelatedObject();
                    break;
                // etc...
            }
        }

    My specific case deals with the factory creating a voting repository based on the item to vote for. The items all implement a Voteable interface. Something like this:

        Default_User implements Voteable
        ...
        Default_Comment implements Voteable
        ...
        Default_Event implements Voteable
        ...

        Default_VoteRepositoryFactory {
            public static function factory( Voteable $item ) {
                // autodiscover what type of repository this item needs
                // for instance, Default_User needs a Default_VoteRepository_User
                // etc...
                return new Default_VoteRepository_OfSomeType();
            }
        }

    I want to be able to drop in new Voteable items and vote repositories for these items, without touching the implementation of the factory.
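
    For what it's worth, here is the autodiscovery-by-naming-convention idea the question is after, sketched in Java to keep one language across this page's examples (the class and package names are illustrative stand-ins, not the poster's PHP API): derive the repository class name from the item's class name and load it reflectively, so a new item/repository pair can be dropped in without touching the factory.

        // Illustrative sketch, not the poster's API: map an item's class name
        // to a repository class name by convention and instantiate it
        // reflectively. Unknown items fail fast with ClassNotFoundException.
        public final class VoteRepositoryFactory {
            private static final String PACKAGE = "repositories"; // assumed layout

            public static Object create(Object voteableItem) throws Exception {
                String itemName = voteableItem.getClass().getSimpleName(); // e.g. "User"
                String repoClass = PACKAGE + ".VoteRepository" + itemName;
                return Class.forName(repoClass).getDeclaredConstructor().newInstance();
            }
        }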

  • Reformat SQLGeography polygons to JSON

    - by James
    I am building a web service that serves geographic boundary data in JSON format. The geographic data is stored in an SQL Server 2008 R2 database using the geography type in a table. I use the [ColumnName].ToString() method to return the polygon data as text. Example output:

        POLYGON ((-6.1646509904325884 56.435153006374627, ... -6.1606079906751 56.4338050060666))
        MULTIPOLYGON (((-6.1646509904325884 56.435153006374627 0 0, ... -6.1606079906751 56.4338050060666 0 0)))

    Geographic definitions can take the form of either an array of lat/long pairs defining a polygon or, in the case of multiple definitions, an array of polygons (multipolygon). I have the following regex that converts the output to JSON objects contained in multi-dimensional arrays, depending on the output:

        Regex latlngMatch = new Regex(@"(-?[0-9]{1}\.\d*)\s(\d{2}.\d*)(?:\s0\s0,?)?", RegexOptions.Compiled);

        private string ConvertPolysToJson(string polysIn)
        {
            return this.latlngMatch.Replace(polysIn.Remove(0, polysIn.IndexOf("(")) // remove POLYGON or MULTIPOLYGON
                                                   .Replace("(", "[")               // convert to JSON array syntax
                                                   .Replace(")", "]"),              // same as above
                                            "{lng:$1,lat:$2},");                    // reformat lat/lng pairs to JSON objects
        }

    This is actually working pretty well and converts the DB output to JSON on the fly in response to an operation call. However, I am no regex master, and the calls to String.Replace() also seem inefficient to me. Does anyone have any suggestions/comments about the performance of this?
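
    As a comment on the String.Replace() concern: the pair rewriting can be done in a single pass with a matcher that appends straight into one output buffer. Sketched in Java (the common language for the examples on this page) with a simplified, illustrative pattern rather than the poster's exact regex:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        // Sketch: rewrite WKT coordinate pairs into JSON objects in one pass
        // over the input, instead of several whole-string replace passes.
        public class WktToJson {
            private static final Pattern PAIR =
                Pattern.compile("(-?\\d+\\.\\d+)\\s(-?\\d+\\.\\d+)(?:\\s0\\s0)?,?");

            static String convert(String wkt) {
                // Strip the POLYGON/MULTIPOLYGON keyword, swap parens for brackets.
                String body = wkt.substring(wkt.indexOf('('))
                                 .replace('(', '[')
                                 .replace(')', ']');
                Matcher m = PAIR.matcher(body);
                StringBuffer out = new StringBuffer();
                while (m.find()) {
                    m.appendReplacement(out, "{lng:$1,lat:$2},");
                }
                m.appendTail(out);
                return out.toString();
            }
        }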

  • Sort and limit queryset by comment count and date using queryset.extra() (django)

    - by thornomad
    I am trying to sort/narrow a queryset of objects based on the number of comments each object has, as well as by the timeframe during which the comments were posted. I am using a queryset.extra() method (with django_comments, which utilizes generic foreign keys). I got the idea for using queryset.extra() (and the code) from here. This is a follow-up question to my initial question yesterday (which shows I am making some progress).

    Current code: what I have so far works, in that it will sort by the number of comments; however, I want to extend the functionality and also be able to pass a timeframe argument (e.g. 7 days) and get back an ordered list of the most commented posts in that time frame. Here is what my view looks like with the basic functionality intact:

        import datetime
        from django.contrib.comments.models import Comment
        from django.contrib.contenttypes.models import ContentType
        from django.db.models import Count, Sum
        from django.views.generic.list_detail import object_list

        def custom_object_list(request, queryset, *args, **kwargs):
            '''Extending the list_detail.object_list to allow some sorting.

            Example: http://example.com/video?sort_by=comments&days=7
            Would get a list of the videos sorted by most comments in the
            last seven days.
            '''
            try:
                # this is where I started working on the date business ...
                days = int(request.GET.get('days', None))
                period = datetime.datetime.utcnow() - datetime.timedelta(days=int(days))
            except (ValueError, TypeError):
                days = None
                period = None
            sort_by = request.GET.get('sort_by', None)
            ctype = ContentType.objects.get_for_model(queryset.model)
            if sort_by == 'comments':
                queryset = queryset.extra(select={
                    'count': """
                        SELECT COUNT(*) AS comment_count
                        FROM django_comments
                        WHERE content_type_id=%s AND object_pk=%s.%s
                    """ % (
                        ctype.pk,
                        queryset.model._meta.db_table,
                        queryset.model._meta.pk.name
                    ),
                }, order_by=['-count']).order_by('-count', '-created')
            return object_list(request, queryset, *args, **kwargs)

    What I've tried: I am not well versed in SQL, but I did try to add another WHERE criterion by hand to see if I could make some progress:

        SELECT COUNT(*) AS comment_count
        FROM django_comments
        WHERE content_type_id=%s AND object_pk=%s.%s
        AND submit_date='2010-05-01 12:00:00'

    But that didn't do anything except mess around with my sort order. Any ideas on how I can add this extra layer of functionality? Thanks for any help or insight.

  • Django m2m form appearing fields

    - by dana
    I have a classroom application and a follow relation. Users can follow each other and can create classrooms. When a user creates a classroom, he can invite only the people that are following him. The Classroom model has a m2m field to the User table. In models.py I have:

        class Classroom(models.Model):
            creator = models.ForeignKey(User)
            classname = models.CharField(max_length=140, unique=True)
            date = models.DateTimeField(auto_now=True)
            open_class = models.BooleanField(default=True)
            members = models.ManyToManyField(User, related_name="list of invited members")

    and in models.py of the follow application:

        class Relations(models.Model):
            initiated_by = models.ForeignKey(User, editable=False)
            date_initiated = models.DateTimeField(auto_now=True, editable=False)
            follow = models.ForeignKey(User, editable=False, related_name="follow")
            date_follow = models.DateTimeField(auto_now=True, editable=False)

    and in views.py of the classroom app:

        def save_classroom(request, username):
            if request.method == 'POST':
                u = User.objects.get(username=username)
                form = ClassroomForm(request.POST, request.FILES)
                if form.is_valid():
                    new_obj = form.save(commit=False)
                    new_obj.creator = request.user
                    r = Relations.objects.filter(initiated_by=request.user)
                    # new_obj.members =
                    new_obj.save()
                    return HttpResponseRedirect('.')
            else:
                form = ClassroomForm()
            return render_to_response('classroom/classroom_form.html', {
                'form': form,
            }, context_instance=RequestContext(request))

    I'm using a ModelForm for the classroom form, and given my many-to-many relation with the User table, the default widget for the members field lists all Users in the database. But I only want that list to contain the users that are in a follow relationship with the logged-in user, the one who creates the classroom. How can I do that? Thanks!

  • Which linear programming package should I use for high numbers of constraints and "warm starts"

    - by davidsd
    I have a "continuous" linear programming problem that involves maximizing a linear function over a curved convex space. In typical LP problems, the convex space is a polytope, but in this case the convex space is piecewise curved -- that is, it has faces, edges, and vertices, but the edges aren't straight and the faces aren't flat. Instead of being specified by a finite number of linear inequalities, I have a continuously infinite number. I'm currently dealing with this by approximating the surface by a polytope, which means discretizing the continuously infinite constraints into a very large finite number of constraints. I'm also in the situation where I'd like to know how the answer changes under small perturbations to the underlying problem. Thus, I'd like to be able to supply an initial condition to the solver based on a nearby solution. I believe this capability is called a "warm start." Can someone help me distinguish between the various LP packages out there? I'm not so concerned with user-friendliness as speed (for large numbers of constraints), high-precision arithmetic, and warm starts. Thanks!

  • A map and set which uses contiguous memory and has a reserve function

    - by edA-qa mort-ora-y
    I use several maps and sets. The lack of contiguous memory, and the high number of (de)allocations, is a performance bottleneck. I need a mostly STL-compatible map and set class which can use a contiguous block of memory for its internal objects (or multiple blocks). It also needs to have a reserve function so that I can preallocate for expected sizes. Before I write my own, I'd like to check what is available first. Is there something in Boost which does this? Does somebody know of an available implementation elsewhere? Intrusive collection types are not usable here, as the same objects need to exist in several collections. As far as I know, STL memory pools are per-type, not per-instance. These global pools are not efficient with respect to memory locality in multi-CPU/core processing. Object pools don't work, as the types will be shared between instances but their pools should not be. A hash map may be an option in some cases.

  • IntegrityError: foreign key violation upon delete

    - by Lukasz Korzybski
    I have Order and Shipment models. Shipment has a foreign key to Order:

        class Order(...):
            ...

        class Shipment(...):
            order = m.ForeignKey('Order')
            ...

    Now in one of my views I want to delete an order object along with all related objects. So I invoke order.delete(). I have Django 1.0.4 and PostgreSQL 8.4, and I use the transaction middleware, so the whole request is enclosed in a single transaction. The problem is that upon order.delete() I get:

        ...
        File "/usr/local/lib/python2.6/dist-packages/django/db/backends/__init__.py", line 28, in _commit
            return self.connection.commit()
        IntegrityError: update or delete on table "main_order" violates foreign key constraint "main_shipment_order_id_fkey" on table "main_shipment"
        DETAIL: Key (id)=(45) is still referenced from table "main_shipment".

    I checked in connection.queries that the proper queries are executed in the proper order. First the shipment is deleted, after which Django executes a delete on the order row:

        {'time': '0.000', 'sql': 'DELETE FROM "main_shipment" WHERE "id" IN (17)'},
        {'time': '0.000', 'sql': 'DELETE FROM "main_order" WHERE "id" IN (45)'}

    The foreign key has ON DELETE NO ACTION (the default) and is initially deferred. I don't know why I get the foreign key constraint violation. I also tried to register a pre_delete signal and manually delete shipment objects before the delete on the order is called, but it resulted in the same error. I could change the ON DELETE behaviour for this key in Postgres, but that would be just a hack; I wonder if anyone has a better idea of what's going on here. There is also a small detail: my Order model inherits from a Cart model, so it actually doesn't have an id field but a cart_ptr_id, and after the DELETE on order is executed there is also a DELETE on cart, but that seems unrelated to the shipment-order problem, so I simplified it in the example.

  • Why is my HTML getting malformed?

    - by alkaloids
    I have the following table in a form that's embedded in a formtastic form (semantic_form_for). Everything I ask to be generated by Ruby shows up, but the table gets badly mangled: essentially, the tags NEVER get formed. The table headers get drawn correctly. There are 14 available_date objects that get passed, and they alternate between having a time value of 1 or 2, so this is just terribly boggling, but probably simple to fix...

        <table class="availability_table">
          <tr>
            <th>Date</th>
            <th>Early</th>
            <th>Late</th>
          </tr>
          <% f.fields_for :available_dates do |ad| %>
            <% if ad.object.time == 1 # if this is an early shift, then start the new row %>
              <tr><td><%= ad.object.date.strftime('%a, %b %d, %Y') %></td>
              <td><%= ad.collection_select(:availability, LookupAvailability.all.collect, :id, :name) %></td>
            <% else # otherwise end the row with just a box %>
              <td><%= ad.collection_select(:availability, LookupAvailability.all.collect, :id, :name) %></td></tr>
            <% end %>
          <% end %>
        </table>

    So like I said, the form functions properly, and the objects all get updated and displayed correctly and all that; it's just that the HTML isn't getting echoed out properly, so my table is all mangled. Help!

  • Using WCF to expose underlying process

    - by Steven
    I think I must be a little dull, because I'm having so much difficulty with this. I use WCF for pretty much everything in-house; it's the most appropriate technology. I have a new Silverlight 3 app that is connecting to the WCF service, and that's working fine. Where the problem begins: because of the expense of creating the objects within this service, and the high correlation of individual objects being shared between clients, I want to have a console application that basically gathers/calculates/caches all the data for the service 24/7, and the service basically connects to the console app (or whatever it is) and gets the pre-processed data. E.g., think of it in terms of a stock reporting app (which it is). Person A has a portfolio of x, y, z. Person B has a portfolio of x, q, z, r. The service needs to provide updated metrics on how their portfolios are performing. So instead of processing person A, then person B, every second, the app independently gathers the stock prices and each person's position information into memory, and the service just queries the in-memory result. Thanks for your help, I really am feeling dumb right now.

  • New NSData with range of old NSData maintaining bytes.

    - by umop
    I have a fairly large NSData (or NSMutableData, if necessary) object which I want to take a small chunk out of, leaving the rest. Since I'm working with large amounts of NSData bytes, I don't want to make a big copy, but instead just truncate the existing bytes. Basically:

        NSData *source:      <a few bytes I want to discard> + <big chunk of bytes I want to keep>
        NSData *destination: <big chunk of bytes I want to keep>

    There are truncation methods in NSMutableData, but they only truncate the end of it, whereas I want to truncate the beginning. My thought is to do this with the methods getBytes:range: and initWithBytesNoCopy:length:freeWhenDone:. However, I'm trying to figure out how to manage memory with these. I'm guessing the process will be like this (I've placed ????s where I don't know what to do):

        void *buffer;

        // Somehow (m)alloc the memory which will be freed up in the last step
        ?????

        // Get range of bytes
        [source getBytes:buffer range:NSMakeRange(myStart, myLength)];

        // Release the source, now that I've copied the bytes out
        [source release];

        // Create a new data object, recycling the bytes so they don't have to be copied
        NSData *destination = [[NSData alloc] initWithBytesNoCopy:buffer length:myLength freeWhenDone:YES];

    Thanks for the help!
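
    Not an Objective-C answer, but for comparison, the same no-copy idea (a view of the buffer that drops a prefix) expressed with Java's ByteBuffer, keeping one language for the sketches on this page:

        import java.nio.ByteBuffer;

        public class SliceDemo {
            public static void main(String[] args) {
                ByteBuffer big = ByteBuffer.allocate(1024 * 1024);

                // Drop a small prefix without copying: position the buffer past
                // the bytes to discard, then slice. The slice shares the backing
                // array with the original buffer.
                int prefixToDiscard = 16;
                big.position(prefixToDiscard);
                ByteBuffer rest = big.slice(); // zero-copy view of the remainder

                System.out.println(rest.capacity()); // 1048576 - 16
            }
        }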

  • Yaml::load_file acting differently between development and production (Rails)

    - by James
    Hi, I am completely stumped by the nature of this problem. We export data from our application into a 'cleaned' YAML file (stripping out IDs, created_at, etc.). Then we (will) allow users to import these files back into the application; it is the import that is completely bugging me out. In development, YAML::load_file(params[:uploaded_data].local_path) returns an array of YAML::Object instances (and it doesn't matter which of the number of different ways the file could be loaded):

        [#<YAML::Object {"exception_count"=>"0", "title"=>"Start", "amount"=>"70.00", "colour"=>nil, "repeat_type_id"=>"0", "repeat_interval"=>"1"}>, etc etc]

    Which is very nice, as the attributes also include the (associated model) exceptions that you see an exception_count for. However on production (Rails 2.3.2, running REE 1.8.7, and 1.8.6 for testing; tested on two different production environments, and running production locally) it returns an array of the objects within the YAML, in this case Event:

        [#<Event repeat_type_id: 0, colour: nil, repeat_interval: 1, exception_count: 0, etc etc>]

    Now this would be just perplexing if it also included the associated model Exception with it; however, it doesn't. Can anyone at all shed some light on why the YAML parser would behave so differently between production and development? I'm on Rails 2.3.2, running REE 1.8.7; however, I've also tested running Ruby 1.8.6 with exactly the same results. Thanks for any help!

  • Web Hosting URL Length Limit?

    - by Isaac Waller
    Hello, I am designing a web application which ties in to my iPhone application. It sends massively large URLs to the web server (about 15,000 characters). I was using NearlyFreeSpeech.net, but they only support URLs up to 2,000 characters. I was wondering if anybody knows of web hosting that will support really large URLs? Thanks, Isaac

    Edit: My program needs to open a picture in Safari. I could do this two ways:

    1. Send it base64-encoded in the URL and just echo the query parameters.
    2. First POST it to the server from my application; the server would store the photo in a database and send back a unique ID, which I would append to a URL and open in Safari; that page would retrieve the photo from the database and then delete it.

    You see, I am lazy, and I know Mobile Safari can support URIs up to 80,000 characters, so I think the first is an OK way to do it. If there is something really wrong with this, please tell me.

    Edit: I ended up doing it the proper POST way. Thanks.

  • NHibernate: how to handle entity-based validation using session-per-request pattern, without control

    - by Seth Petry-Johnson
    What is the best way to do entity-based validation (each entity class has an IsValid() method that validates its internal members) in ASP.NET MVC, with a "session-per-request" model, where the controller has zero (or limited) knowledge of the ISession? Here's the pattern I'm using:

    1. Get an entity by ID, using an IFooRepository that wraps the current NH session. This returns a connected entity instance.
    2. Load the entity with potentially invalid data, coming from the form post.
    3. Validate the entity by calling its IsValid() method.
    4. If valid, call IFooRepository.Save(entity). Otherwise, display an error message.

    The session is currently opened when the request begins and flushed when the request ends. Since my entity is connected to a session, flushing the session attempts to save the changes even if the object is invalid. What's the best way to keep validation logic in the entity class, limit controller knowledge of NH, and avoid saving invalid changes at the end of a request?

    Option 1: Explicitly evict on validation failure, implicitly flush: if the validation fails, I could manually evict the invalid object in the action method. If successful, I do nothing and the session is automatically flushed. Con: error prone and counter-intuitive ("I didn't call .Save(), why are my invalid changes being saved anyway?").

    Option 2: Explicitly flush, do nothing by default: by default I can dispose of the session on request end, only flushing if the controller indicates success. I'd probably create a SaveChanges() method in my base controller that sets a flag indicating success, and then query this flag when closing the session at request end. Pro: more intuitive to troubleshoot if a dev forgets this step (relative to option 1). Con: I have to call both IRepository.Save(entity) and SaveChanges().

    Option 3: Always work with disconnected objects: I could modify my repositories to return disconnected/transient objects, and modify the Repo.Save() method to re-attach them. Pro: most intuitive, given that controllers don't know about NH. Con: does this defeat many of the benefits I'd get from NH?

  • Hadoop Map/Reduce - simple use example to do the following...

    - by alexeypro
    I have a MySQL database where I store the following: a BLOB (which contains a JSON object) and an ID (for this JSON object). The JSON object contains a lot of different information, say, "city: Los Angeles" and "state: California". There are about 500k such records for now, but they are growing. And each JSON object is quite big. My goal is to do searches (real-time) in the MySQL database. Say, I want to search for all JSON objects which have "state" set to "California" and "city" set to "San Francisco". I want to utilize Hadoop for the task. My idea is that there will be a "job" which takes chunks of, say, 100 records (rows) from MySQL, verifies them according to the given search criteria, and returns those (IDs) which qualify. Pros/cons? I understand that one might think I should utilize plain SQL power for that, but the thing is that the JSON object structure is pretty "heavy". If I put it into SQL schemas, there will be at least 3-5 table joins, which (I tried, really) creates quite a headache, and building all the right indexes eats RAM faster than one can think. ;-) And even then, every SQL query has to be analyzed to be sure it is utilizing the indexes, otherwise with a full scan it literally is a pain. With such a structure, the only way "up" is vertical scaling. But I am not sure that's the best option for me, as I see how the JSON objects will grow (the data structure), and I see that the number of them will grow too. :-) Help? Can somebody point me to simple examples of how this can be done? Does it make sense at all? Am I missing something important? Thank you.
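
    To make the "job" concrete, a minimal mapper sketched under one loud assumption: the rows have first been exported (or exposed through a DB input format) as lines of id<TAB>json, since a Hadoop job reads input splits rather than live MySQL rows. Names and matching criteria are illustrative.

        import java.io.IOException;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.NullWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Mapper;

        // Filtering step of the proposed job: emit the IDs of records whose
        // JSON matches the search criteria.
        public class JsonFilterMapper
                extends Mapper<LongWritable, Text, Text, NullWritable> {

            @Override
            protected void map(LongWritable offset, Text line, Context context)
                    throws IOException, InterruptedException {
                String[] parts = line.toString().split("\t", 2);
                if (parts.length < 2) return;
                String id = parts[0];
                String json = parts[1];

                // Naive substring check; a real job would parse the JSON properly.
                if (json.contains("\"state\":\"California\"")
                        && json.contains("\"city\":\"San Francisco\"")) {
                    context.write(new Text(id), NullWritable.get()); // matching ID
                }
            }
        }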

  • casting Collection<SomeClass> to Collection<SomeSuperClass>

    - by skrebbel
    Hi all, I'm sure this has been answered before, but I really cannot find it. I have a Java class SomeClass and an abstract class SomeSuperClass. SomeClass extends SomeSuperClass. Another abstract class has a method that returns a Collection<SomeSuperClass>. In an implementation class, I have a Collection<SomeClass> myCollection. I understand that I cannot just return myCollection, because Collection<SomeClass> does not inherit from Collection<SomeSuperClass>. Nevertheless, I know that everything in myCollection is a SomeSuperClass, because after all they're SomeClass objects, which extend SomeSuperClass. How can I make this work? I.e. I want:

        public class A {
            private Collection<SomeClass> myCollection;

            public Collection<SomeSuperClass> getCollection() {
                return myCollection; // compile error!
            }
        }

    The only way I've found is casting via a non-generic type and getting unchecked warnings and whatnot. There must be a more elegant way, though? I feel that using Collections.checkedSet() and friends is also not needed, since it is statically certain that the returned collection only contains SomeClass objects (this would not be the case when downcasting instead of upcasting, but that's not what I'm doing). What am I missing? Thanks!
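
    A sketch of the usual way out: widen the return type to a bounded wildcard. Collection<SomeClass> is a subtype of Collection<? extends SomeSuperClass>, so this compiles with no casts and no unchecked warnings; if the signature must stay exactly Collection<SomeSuperClass>, copying into a new collection is the warning-free fallback.

        import java.util.ArrayList;
        import java.util.Collection;

        class SomeSuperClass {}
        class SomeClass extends SomeSuperClass {}

        class A {
            private final Collection<SomeClass> myCollection = new ArrayList<SomeClass>();

            // Covariant, effectively read-only view: callers can iterate the
            // elements as SomeSuperClass but cannot add to the collection.
            public Collection<? extends SomeSuperClass> getCollection() {
                return myCollection; // compiles cleanly
            }

            // If exactly Collection<SomeSuperClass> is required, a shallow
            // copy is the safe, warning-free route.
            public Collection<SomeSuperClass> getCollectionCopy() {
                return new ArrayList<SomeSuperClass>(myCollection);
            }
        }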

  • HTTP POST parameters order / REST URLs

    - by pq
    Let's say that I'm uploading a large file via a POST HTTP request. Let's also say that I have another parameter (other than the file) that names the resource which the file is updating. The resource cannot be part of the URL the way you can do it with REST (e.g. foo.com/bar/123); let's say this is due to a combination of technical and political reasons. The server needs to ignore the file if the resource name is invalid or, say, the IP address and/or the logged-in user are not authorized to update the resource. It looks like, if this POST came from an HTML form that contains the resource name field first and the file field second, most (all?) browsers preserve this order in the POST request. But it would be naive to fully rely on that, no? In other words, the order of HTTP parameters is insignificant and a client is free to construct the POST in any order. Isn't that true? Which means that, at least in theory, the server may end up storing the whole large file before it can deny the request. It seems to me that this is a clear case where RESTful URLs have an advantage, since you don't have to look at the POST content to perform certain authorization/error checking on the request. Do you agree? What are your thoughts, experiences?
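
    One mitigation, sketched in Java with Commons FileUpload's streaming API (an assumption about the stack, since the question names no server technology): read the multipart parts in wire order and refuse to persist the file part unless an authorized resource name has already been seen. The client's bytes may still cross the wire, but the server never buffers or stores an unauthorized file. Field names and helpers here are illustrative.

        import javax.servlet.http.HttpServletRequest;
        import org.apache.commons.fileupload.FileItemIterator;
        import org.apache.commons.fileupload.FileItemStream;
        import org.apache.commons.fileupload.servlet.ServletFileUpload;
        import org.apache.commons.fileupload.util.Streams;

        public class UploadHandler {
            // Streams the multipart body part by part, in the order sent.
            void handle(HttpServletRequest request) throws Exception {
                FileItemIterator it = new ServletFileUpload().getItemIterator(request);
                String resource = null;
                while (it.hasNext()) {
                    FileItemStream part = it.next();
                    if (part.isFormField() && "resource".equals(part.getFieldName())) {
                        resource = Streams.asString(part.openStream());
                        if (!authorized(resource)) return; // deny before touching the file
                    } else if (!part.isFormField()) {
                        if (resource == null || !authorized(resource)) {
                            return; // file arrived first or unauthorized: never stored
                        }
                        store(resource, part.openStream());
                    }
                }
            }

            private boolean authorized(String resource) { /* stub */ return false; }
            private void store(String resource, java.io.InputStream in) { /* stub */ }
        }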

  • Pattern for limiting number of simultaneous asynchronous calls

    - by hitch
    I need to retrieve multiple objects from an external system. The external system supports multiple simultaneous requests (i.e. threads), but it is possible to flood it; therefore I want to retrieve multiple objects asynchronously, while throttling the number of simultaneous async requests. I.e. I need to retrieve 100 items, but don't want to be retrieving more than 25 of them at once. When each request of the 25 completes, I want to trigger another retrieval, and once they are all complete I want to return all of the results in the order they were requested (i.e. there is no point returning the results until the entire call is finished). Are there any recommended patterns for this sort of thing? Would something like this be appropriate (pseudocode, obviously)?

        private List<externalSystemObjects> returnedObjects = new List<externalSystemObjects>;

        public List<externalSystemObjects> GetObjects(List<string> ids)
        {
            int callCount = 0;
            int maxCallCount = 25;
            WaitHandle[] handles;

            foreach(id in itemIds to get)
            {
                if(callCount < maxCallCount)
                {
                    WaitHandle handle = executeCall(id, callback);
                    addWaitHandleToWaitArray(handle);
                }
                else
                {
                    int returnedCallId = WaitHandle.WaitAny(handles);
                    removeReturnedCallFromWaitHandles(handles);
                }
            }

            WaitHandle.WaitAll(handles);
            return returnedObjects;
        }

        public void callback(object result)
        {
            returnedObjects.Add(result);
        }
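
    The same shape in Java (the question reads as .NET; this is only the pattern, with fetch() standing in for the external call): a counting semaphore caps in-flight requests at 25, and collecting the Futures in submission order returns results in request order.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;
        import java.util.concurrent.Semaphore;

        public class ThrottledFetcher {
            private static final int MAX_IN_FLIGHT = 25;
            private final Semaphore slots = new Semaphore(MAX_IN_FLIGHT);
            private final ExecutorService pool = Executors.newCachedThreadPool();

            public List<String> getObjects(List<String> ids) throws Exception {
                List<Future<String>> futures = new ArrayList<>();
                for (String id : ids) {
                    futures.add(pool.submit(() -> {
                        slots.acquire();     // blocks once 25 calls are in flight
                        try {
                            return fetch(id);
                        } finally {
                            slots.release(); // frees a slot so the next call can start
                        }
                    }));
                }
                List<String> results = new ArrayList<>();
                for (Future<String> f : futures) results.add(f.get()); // request order
                return results;
            }

            private String fetch(String id) { return "result-for-" + id; } // stand-in
        }

    A fixed pool of 25 threads (Executors.newFixedThreadPool(25)) would enforce the same cap on its own; the explicit semaphore just makes the throttle visible and lets the pool size vary independently.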

  • faster way to change xml to array (grails to flex)

    - by Anthony Umpad
    I have a large XML document passed from Grails to Flex. When Flex receives the XML, it converts it into an associative array object. Given the large XML file, it takes too long to complete the loop; is there any way in Flex to make the conversion faster? Below is my sample code.

        <xml>
          <car>
            <model>Vios</model>
            <type>Sedan</type>
            <color>Blue</color>
          </car>
          <car>
            <model>Camry</model>
            <type>Luxury</type>
            <color>Black</color>
          </car>
        </xml>

    This is converted to the Flex associative array below:

        [Vios].type = Sedan
              .color = Blue
        [Camry].type = Luxury
              .color = Black

    Below is the code I use in Flex to convert the XML to the associative array object:

        var tempXML = xml.children();
        var tempArray:Array = new Array();
        for (var i:int = 0; i < tempXML.length(); i++)
        {
            tempArray[tempXML[i].@model] = new Object();
            tempArray[tempXML[i].@model].color = tempXML[i].@color.toString();
            tempArray[tempXML[i].@model].type = tempXML[i].@type.toString();
        }
