Search Results

Search found 21702 results on 869 pages for 'large objects'.

Page 306/869 | < Previous Page | 302 303 304 305 306 307 308 309 310 311 312 313  | Next Page >

  • Sync two SqlExpress using NHibernate

    - by Christian
    Hello, I am creating a simple project management system which uses NHibernate for object storage. The underlying database is SQL Express (at least currently, for development). The client runs on either the desktop or the laptop. I know I could use web services and store the DB only on the desktop, but this would force the desktop to be available all the time. I am currently thinking about duplicating the DB, having two instances with "different data". To clarify, we are not talking about a productive app here, it's a prototype. One very simple way to achieve this would be the following process: Client: Check if desktop DB is available (through web service) Client: If yes, use desktop storage, no problem here Client: If not, use own DB as storage Client: Poll desktop regularly, as soon as it comes on, sync Client: Switch to desktop storage ... Desktop: Do not attempt any DB operation before checking for required sync Desktop: If sync needed, do it... My question is now, how would you sync? Assume 4 or 5 types of objects, all with GUIDs as identifiers. Would you always manually "lazy load" all objects of a certain type and feed them to the DB? Would you always drop the whole desktop DB in case the client DB may be newer and out of sync? Again, I want to stress, I am not assuming any conflicts or stale data, I basically just want to "copy the whole DB from the client". Would you use NHibernate for this? Or would you separate the copy process? When I think about it, my question comes down to this: Is there any function in NHibernate like SyncDBs_SourceWins_(SourceDB, TargetDB)? Thanks for the help, Chris

    Read the article

  • Animate and form rows, arrays, AS3

    - by VideoDnd
    Question How can I animate and form rows together? Explanation One 'for loop' is for animation, the other 'for loop' is for making rows. I want to understand how to use arrays and create a row of sprite animations. 'for loop' for animation //FRAMES ARRAY //THIS SETS UP MY ANIMATION FOR TIMER EVENT var frames:Array = [ new Frame1(), new Frame2(), new Frame3(), new Frame4(), new Frame5(), new Frame6(), new Frame7(), new Frame8(), new Frame9(), new Frame0(), ]; for each (var frame:Sprite in frames) { addChild(frame); } 'for loop' for rows //THIS MAKES A ROW OF DISPLAY OBJECTS var numberOfClips:Number = 11; var xStart:Number = 0; var yStart:Number = 0; var xVal:Number = xStart; var xOffset:Number = 2; for (var $:Number=0; $<numberOfClips; $++) { //DUDE ARRAY var dude:Array = frames; dude.y = yStart +11; dude.x = xVal +55; xVal = dude.x + dude.width + this.xOffset; } timer var timer:Timer = new Timer(100); timer.addEventListener(TimerEvent.TIMER, countdown); function countdown(event:TimerEvent) { var currentFrame:int = timer.currentCount % frames.length; for (var i:int = 0; i < frames.length; ++i) { frames[i].visible = (i == currentFrame); } } timer.start(); counter experiment My new class I'm working on loops through 10 different display objects that are numbers. For those following, I'm trying to make something like NumbersView.

    Read the article

  • Getting Outlook calendar items based on subject

    - by EKS
    I'm trying to get a list of calendar objects from Exchange and sort them based on subject. The part of getting the objects just based on date and sorting them out via subject in "code" is now working, but I want to do the sort on subject in the "sql" first, and I'm unable to make it work (currently I'm getting an error from Exchange saying the query is wrong). The line I added is: + "AND lcase(\"urn:schemas:calendar:subject\") = 'onsite%' " What I want is the ability to catch all appointments that start with onsite, both in upper and lower case. strQuery = "<?xml version=\"1.0\"?>" + "<g:searchrequest xmlns:g=\"DAV:\">" + "<g:sql>SELECT \"urn:schemas:calendar:location\", \"urn:schemas:httpmail:subject\", " + "\"urn:schemas:calendar:dtstart\", \"urn:schemas:calendar:dtend\", " + "\"urn:schemas:calendar:busystatus\", \"urn:schemas:calendar:instancetype\" " + "FROM Scope('SHALLOW TRAVERSAL OF \"" + strCalendarURI + "\"') " + "WHERE NOT \"urn:schemas:calendar:instancetype\" = 1 " + "AND \"DAV:contentclass\" = 'urn:content-classes:appointment' " + "AND \"urn:schemas:calendar:dtstart\" > '2003/06/01 00:00:00' " //'" + DateString + "'" + "AND lcase(\"urn:schemas:calendar:subject\") = 'onsite' " + "ORDER BY \"urn:schemas:calendar:dtstart\" ASC" + "</g:sql></g:searchrequest>";

    Read the article

  • Ways of breaking down SQL transactional/call data into reports -- 'square data'?

    - by RizwanK
    I've got a large database of call-traffic information (although the question could be answered with any generic data set). For instance, a row contains: call endpoint server (endpoint_name), call endpoint status (sip_disconnect_reason), call destination (destination), call completed (duration) [duration > 0 is completed], call account group (account_group). It's pretty easy to run SQL reports against the data, i.e. select count(*), endpoint_name from calls where duration > 0 group by endpoint_name select count(*), destination from calls where blah group by destination I've been calling these filtering or breakdown reports (I get the number of calls per carrier, etc.). Add another breakdown, and you've got two breakdowns, a la select count(*), endpoint_name, sip_disconnect_reason from calls where duration=0 group by endpoint_name, sip_disconnect_reason Of course, if you keep adding breakdowns, you end up making super-large reports and slicing your data so thin that you can't extract any trends from it. So my question is this: Is there a name for this sort of method of report writing? (I've heard words like squares, slicing and breakdown reports applied to them.) --- I'm looking for a Python/Reporting toolkit that I can use to make these easier to generate for my end users. aside: Are there other ways of representing transactional data that might be useful rather than the above method? Thanks,
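
    This style of report is commonly described as dimensional or OLAP-style analysis (slicing and dicing a cube; a two-way breakdown is a cross-tab or pivot table). As a rough sketch of a Python toolkit for it, assuming the rows can be pulled into memory and that pandas is acceptable (the column names simply mirror the question):

        import pandas as pd

        # Pull the calls table into a DataFrame; `connection` is any DB-API connection.
        df = pd.read_sql("SELECT endpoint_name, sip_disconnect_reason, destination, "
                         "duration, account_group FROM calls", connection)

        # One breakdown: completed calls per endpoint.
        per_endpoint = df[df["duration"] > 0].groupby("endpoint_name").size()

        # Two breakdowns at once, as a cross-tab: failed calls by endpoint and reason.
        failed = df[df["duration"] == 0]
        by_endpoint_and_reason = pd.crosstab(failed["endpoint_name"],
                                             failed["sip_disconnect_reason"])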

    Read the article

  • Static Variables somehow maintaining state?

    - by gfoley
    I am working on an existing project, set up by another coder. I'm having some trouble understanding how state is being maintained between pages. There is a class library which has some helper objects. Mostly these objects are just used for their static methods and are rarely instantiated or inherited. This is an example class I'm testing with. public sealed class Application { public static string Test; } Now when I run something like the following in the base class of my page, I would expect the result to be "1: 2:Test" all the time (note that "1" is empty), but strangely it's only this way the first time it is run. Then every time afterwards it's "1:Test 2:Test". Somehow the state of the static variable is being maintained between pages and refreshes?? Response.Write("1:" + SharedLibrary.Application.Test); SharedLibrary.Application.Test = "Test"; Response.Write(" 2:" + SharedLibrary.Application.Test); I need to create more classes like this, but want to understand why this is occurring in the first place. Many thanks

    Read the article

  • Style first 2 TextViews in Android ListView differently

    - by Kurtis Nusbaum
    I have a ListView and I want the first two entries in it to be displayed differently than the rest. Nothing fancy, I want them all to just be text views. But the first two entries need to have different sizes and weights than the rest. I tried modifying the ArrayAdapter class like so: private class BusStopAdapter<T> extends ArrayAdapter<T>{ public BusStopAdapter( Context context, int textViewResourceId, List<T> objects) { super(context, textViewResourceId, objects); } public View getView(int position, View convertView, ViewGroup parent) { TextView toReturn = (TextView)super.getView(position, convertView, parent); if(position == 0){ toReturn.setTextSize(12); toReturn.setText("Previous Bus: " + toReturn.getText()); toReturn.setPadding(0,0,0,0); } else if(position == 1){ toReturn.setTextSize(20); toReturn.setPadding( toReturn.getPaddingLeft(), 0, toReturn.getPaddingRight(), 0 ); toReturn.setText("Next Bus: " + toReturn.getText()); toReturn.setGravity(Gravity.CENTER_HORIZONTAL|Gravity.TOP); } return toReturn; } } But this inadvertently causes some of the other textviews to take on these special attributes. I think it's because textviews get "recycled" in the AbsListAdapter class.

    Read the article

  • How to provide an inline model field with a queryset choices without losing field value for inline r

    - by Judith Boonstra
    The code displayed below is providing the choices I need for the app field, and the choices I need for the attr field, when using Admin. I am having a problem with the attr field on the inline form for already saved records. The attr selected for these saved records does show in small print above the field, but not within the field itself. # MODELS: class Vocab(models.Model): entity = models.CharField(max_length = 40, unique = True) class App(models.Model): name = models.ForeignKey(Vocab, related_name = 'vocab_appname', unique = True) app = models.ForeignKey('self', verbose_name = 'parent', blank = True, null = True) attr = models.ManyToManyField(Vocab, related_name = 'vocab_appattr', through = 'AppAttr') def parqs(self): a method that provides a queryset consisting of available apps from vocab, excluding self and any apps within the current app's dependent line. def attrqs(self): a method that provides a queryset consisting of available attr from vocab, excluding 1) those already selected by the current app, 2) those already selected by any apps within the current app's parent line, and 3) those selected by any apps within the current app's dependent line. class AppAttr(models.Model): app = models.ForeignKey(App) attr = models.ForeignKey(Vocab) # FORMS: from models import AppAttr def appattr_form_callback(instance, field, *args, **kwargs): if field.name == 'attr': if instance: return field.formfield(queryset = instance.attrqs(), **kwargs) return field.formfield(**kwargs) # ADMIN: necessary imports class AppAttrInline(admin.TabularInline): model = AppAttr def get_formset(self, request, obj = None, **kwargs): kwargs['formfield_callback'] = curry(appattr_form_callback, obj) return super(AppAttrInline, self).get_formset(request, obj, **kwargs) class AppForm(forms.ModelForm): class Meta: model = App def __init__(self, *args, **kwargs): super(AppForm, self).__init__(*args, **kwargs) if self.instance.id is None: working = App.objects.all() else: thisrec = App.objects.get(id = self.instance.id) working = thisrec.parqs() self.fields['par'].queryset = working class AppAdmin(admin.ModelAdmin): form = AppForm inlines = [AppAttrInline,] fieldsets = .......... necessary register statements

    Read the article

  • NoSQL for filesystem storage organization and replication?

    - by wheaties
    We've been discussing design of a data warehouse strategy within our group for meeting testing, reproducibility, and data syncing requirements. One of the suggested ideas is to adapt a NoSQL approach using an existing tool rather than try to re-implement a whole lot of the same on a file system. I don't know if a NoSQL approach is even the best approach to what we're trying to accomplish, but perhaps if I describe what we need/want you all can help. Most of our files are large, 50+ Gig in size, held in a proprietary, third-party format. We need to be able to access each file by a name/date/source/time/artifact combination. Essentially a key-value pair style look-up. When we query for a file, we don't want to have to load all of it into memory. They're really too large and would swamp our server. We want to be able to somehow get a reference to the file and then use a proprietary, third-party API to ingest portions of it. We want to easily add, remove, and export files from storage. We'd like to set up automatic file replication between two servers (we can write a script for this). That is, sync the contents of one server with another. We don't need a distributed system where it only appears as if we have one server. We'd like complete replication. We also have other smaller files that have a tree-type relationship with the big files. One file's content will point to the next and so on, and so on. It's not a "spoked wheel," it's a full-blown tree. We'd prefer a Python, C or C++ API to work with a system like this, but most of us are experienced with a variety of languages. We don't mind as long as it works, gets the job done, and saves us time. What do you think? Is there something out there like this?
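
    Whatever storage engine ends up being chosen, the key/value part of the requirement is small enough to sketch. Below is a minimal illustration in Python using only the standard library's sqlite3: the index maps a name/date/source/time/artifact key to a path, and lookups hand back a path reference for the proprietary API to read in portions, so the large files themselves never pass through memory. All names here are illustrative, not a recommendation of a particular product.

        import sqlite3

        con = sqlite3.connect("file_index.db")
        con.execute("""CREATE TABLE IF NOT EXISTS artifacts (
                           name TEXT, date TEXT, source TEXT, time TEXT, artifact TEXT,
                           path TEXT,
                           PRIMARY KEY (name, date, source, time, artifact))""")

        def add(name, date, source, time, artifact, path):
            # Register a file; the content itself stays on disk.
            con.execute("INSERT OR REPLACE INTO artifacts VALUES (?,?,?,?,?,?)",
                        (name, date, source, time, artifact, path))
            con.commit()

        def lookup(name, date, source, time, artifact):
            # Return a path reference, never the file content.
            row = con.execute("SELECT path FROM artifacts WHERE name=? AND date=? AND "
                              "source=? AND time=? AND artifact=?",
                              (name, date, source, time, artifact)).fetchone()
            return row[0] if row else None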

    Read the article

  • How can I generate an "unlimited" world?

    - by snowlord
    I would like to create a game with an endless (in reality an extremely large) world in which the player can move about. Whether or not I will ever get around to implementing the game is one matter, but I find the idea interesting and would like some input on how to do it. The point is to have a world where all data is generated randomly on demand, but in a deterministic way. Currently I focus on a large 2D map from which it should be possible to display any part without knowledge about the surrounding parts. I have implemented a prototype by writing a function that gives a random-looking, but deterministic, integer given the x and y of a pixel on the map (see my recent question about this function). Using this function I populate the map with "random" values, and then I smooth the map using a simple filter based on the surrounding pixels. This makes the map dependent on a few pixels outside its edge, but that's not a big problem. The final result is something that at least looks like a map (especially with a good altitude color map). Given this, one could maybe first generate a coarser map which is used to generate bigger differences in altitude to create mountain ranges and seas. Anyway, that was my idea, but I am sure that there exist ways to do this already and I also believe that given the specification, many of you can come up with better ideas. EDIT: Forgot the link to my question.
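
    A minimal Python sketch of the deterministic pixel function described above: hash the coordinates (plus a world seed) to get a reproducible "random" altitude for any (x, y), then smooth with a small box filter that only needs a one-pixel border of context. The hash choice, constants and filter size are arbitrary illustrative choices.

        import hashlib

        def raw_value(x, y, seed=0):
            # Deterministic and uniform in [0, 1): the same (seed, x, y) always gives the same value.
            digest = hashlib.blake2b(f"{seed}:{x}:{y}".encode(), digest_size=8).digest()
            return int.from_bytes(digest, "big") / 2**64

        def altitude(x, y, seed=0):
            # 3x3 box filter over the raw noise; any tile can be generated independently.
            return sum(raw_value(x + dx, y + dy, seed)
                       for dx in (-1, 0, 1) for dy in (-1, 0, 1)) / 9.0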

    Read the article

  • Sending a JSON object to an ASP.NET web service using JQUERY ajax function

    - by uzay95
    I want to create objects on the client side of an aspx page. And I want to add functions to these JavaScript classes to make life easier. Actually I can get and use the objects (derived from the server-side classes) which are returned from the services. When I wanted to send objects from the client by jQuery ajax methods, I couldn't do it :) This is my JavaScript object: function ClassAndMark(_mark, _lesson){ this.Lesson = _lesson; this.Mark = _mark; } function Student(_name, _surname, _classAndMark){ this.Name = _name; this.SurName = _surname; this.ClassAndMark = _classAndMark; } Student.prototype.fSaveToDB = function(){ $.ajax({ type: "POST", contentType: "application/json; charset=utf-8", url: "/WS/SaveObject.asmx/fSaveToDB", data: ????????????, dataType: "json" }); } Actually I don't know what the definition of the classes and methods on the server side should be, but I think: class ClassAndMark{ public string Lesson ; public string Mark ; } class Student{ public string Name ; public string SurName ; public ClassAndMark ClassAndMark ; } The web service is below, but again I couldn't work out what should go in place of the ???? : [WebMethod()] public void fSaveToDB(???? _obj) { // How can I convert the input parameter/parameters // of the method into a server-side object? }

    Read the article

  • Android: CustomListAdapter

    - by primal
    Hi, I have implemented a custom list view which looks like the Twitter timeline. adapter = new MyClickableListAdapter(this, R.layout.timeline, mObjectList); setListAdapter(adapter); The constructor for MyClickableListAdapter is as follows private class MyClickableListAdapter extends ClickableListAdapter{ public MyClickableListAdapter(Context context, int viewId, List objects) { super(context, viewId, objects); } ClickableListAdapter extends BaseAdapter and implements the necessary methods. The xml code for the list view is as follows <ListView android:id="@+id/android:list" android:layout_width="fill_parent" android:layout_height="wrap_content" /> This is what it looks like. I have 3 questions 1) I tried registering a context menu for the list view by adding the following line after setting the list adapter: registerForContextMenu(getListView()); But on long-click the menu doesn't get displayed. I cannot understand what I am doing wrong! 2) Is it possible to display a textview above the listview? I tried it by adding the code for the textview above the listview. But then, only the textview gets displayed. 3) I have seen in many Twitter clients that on clicking post, a window pops up from the top covering only some portion of the screen and the rest of the timeline is visible. How can this be done, possibly without starting a new activity? Any help would be much appreciated..

    Read the article

  • Are there any downsides in using C++ for network daemons?

    - by badcat
    Hey guys! I've been writing a number of network daemons in different languages over the past years, and now I'm about to start a new project which requires a new custom implementation of a proprietary network protocol. The said protocol is pretty simple - some basic JSON formatted messages which are transmitted with some basic frame wrapping so that clients know a message arrived completely and is ready to be parsed. The daemon will need to handle a number of connections (about 200 at the same time) and do some management of them and pass messages along, like in a chat room. In the past I've been using mostly C++ to write my daemons. Often with the Qt4 framework (the network parts, not the GUI parts!), because that's what I also used for the rest of the projects and it was simple to do and very portable. This usually worked just fine, and I didn't have much trouble. Being a Linux administrator for a good while now, I noticed that most of the network daemons in the wild are written in plain C (of course some are written in other languages, too, but I get the feeling that 80% of the daemons are written in plain C). Now I wonder why that is. Is this due to a pure historic UNIX background (like KISS) or for plain portability or reduction of bloat? What are the reasons to not use C++ or any "higher level" languages for things like daemons? Thanks in advance! Update 1: For me, using C++ is usually more convenient because I have objects which have getter and setter methods and such. Plain C's "context" objects can be a real pain at some point - especially when you are used to object-oriented programming. Yes, I'm aware that C++ is a superset of C, and that C code is basically C++. But that's not the point. ;)

    Read the article

  • What's the difference between DI and factory patterns?

    - by Anthony Short
    I have a class which depends on 3 classes, all 3 of which have other classes they rely on. Currently, I'm using a container class to build up all the required classes, inject them into one another and return the application. The simplified version of the container looks something like this: class Builder { private $_options; public function __construct($options) { $this->_options = $options; } public function build() { $cache = $this->getCache(); $response = $this->getResponse(); $engine = $this->getEngine(); return new Application($cache,$response,$engine); } public function getResponse() { $encoder = $this->getResponseEncoder(); $cache = $this->getResponseCache(); return new Response($encoder,$cache); } // Methods for building each object } I'm not sure if this would be classified as FactoryMethod or a DI Container. They both seem to solve the same problem in the same way - They build objects and inject dependencies. This container has some more complicated building methods, like loading observers and attaching them to observable objects. Should factories be doing all the building (loading extensions etc) and the DI container should use these factories to inject dependencies? That way the sub-packages, like Cache, Response etc, can each have their own specialised factories.
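
    One common way the line is drawn: a factory encapsulates how a single family of objects gets built, while a DI container (or composition root) decides which factories to call and wires the results together. A toy sketch, written in Python for brevity, with placeholder classes that simply mirror the names in the question:

        class Encoder: pass
        class ResponseCache: pass
        class Cache: pass
        class Engine: pass

        class Response:
            def __init__(self, encoder, cache):
                self.encoder, self.cache = encoder, cache

        class Application:
            def __init__(self, cache, response, engine):
                self.cache, self.response, self.engine = cache, response, engine

        class ResponseFactory:
            # Factory: knows how to assemble one thing (a Response) and its parts.
            def create(self):
                return Response(Encoder(), ResponseCache())

        class Container:
            # DI container / composition root: decides which factories to use
            # and wires the whole object graph together.
            def build(self):
                return Application(Cache(), ResponseFactory().create(), Engine())

        app = Container().build()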

    Read the article

  • python os.mkfifo() for Windows

    - by user302099
    Hello. Short version (if you can answer the short version it does the job for me, the rest is mainly for the benefit of other people with a similar task): In Python on Windows, I want to create 2 file objects, attached to the same file (it doesn't have to be an actual file on the hard drive), one for reading and one for writing, such that if the reading end tries to read it will never get EOF (it will just block until something is written). I think on Linux os.mkfifo() would do the job, but on Windows it doesn't exist. What can be done? (I must use file objects.) Some extra details: I have a Python module (not written by me) that plays a certain game through stdin and stdout (using raw_input() and print). I also have a Windows executable playing the same game, through stdin and stdout as well. I want to make them play one against the other, and log all their communication. Here's the code I can write (the get_fifo() function is not implemented, because that's the part I don't know how to do on Windows): class Pusher(Thread): def __init__(self, source, dest, p1, name): Thread.__init__(self) self.source = source self.dest = dest self.name = name self.p1 = p1 def run(self): while (self.p1.poll()==None) and\ (not self.source.closed) and (not self.dest.closed): line = self.source.readline() logging.info('%s: %s' % (self.name, line[:-1])) self.dest.write(line) self.dest.flush() exe_to_pythonmodule_reader, exe_to_pythonmodule_writer =\ get_fifo() pythonmodule_to_exe_reader, pythonmodule_to_exe_writer =\ get_fifo() p1 = subprocess.Popen(exe, shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE) old_stdin = sys.stdin old_stdout = sys.stdout sys.stdin = exe_to_pythonmodule_reader sys.stdout = pythonmodule_to_exe_writer push1 = Pusher(p1.stdout, exe_to_pythonmodule_writer, p1, '1') push2 = Pusher(pythonmodule_to_exe_reader, p1.stdin, p1, '2') push1.start() push2.start() ret = pythonmodule.play() sys.stdin = old_stdin sys.stdout = old_stdout
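
    One possible get_fifo() that avoids os.mkfifo() entirely: os.pipe() is available on Windows, and a read on the read end blocks until data arrives or the write end is closed, which is the no-premature-EOF behaviour described above. A sketch (remember to flush the writer after each line; error handling omitted):

        import os

        def get_fifo():
            read_fd, write_fd = os.pipe()
            reader = os.fdopen(read_fd, "r")
            writer = os.fdopen(write_fd, "w")
            return reader, writer

        reader, writer = get_fifo()
        writer.write("hello\n")
        writer.flush()
        print(reader.readline())   # blocks until a full line has been written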

    Read the article

  • Multithreaded linked list traversal

    - by Rob Bryce
    Given a (doubly) linked list of objects (C++), I have an operation that I would like to multithread, performing it on each object. The cost of the operation is not uniform for each object. The linked list is the preferred storage for this set of objects for a variety of reasons. The 1st element in each object is the pointer to the next object; the 2nd element is the previous object in the list. I have solved the problem by building an array of nodes, and applying OpenMP. This gave decent performance. I then switched to my own threading routines (based on Windows primitives) and by using InterlockedIncrement() (acting on the index into the array), I can achieve higher overall CPU utilization and faster throughput. Essentially, the threads work by "leap-frogging" along the elements. My next approach to optimization is to try to eliminate creating/reusing the array of elements in my linked list. However, I'd like to continue with this "leap-frog" approach and somehow use some nonexistent routine that could be called "InterlockedCompareDereference" - to atomically compare against NULL (end of list) and conditionally dereference & store, returning the dereferenced value. I don't think InterlockedCompareExchangePointer() will work since I cannot atomically dereference the pointer and call this Interlocked() method. I've done some reading and others are suggesting critical sections or spin-locks. Critical sections seem heavy-weight here. I'm tempted to try spin-locks but I thought I'd first pose the question here and ask what other people are doing. I'm not convinced that the InterlockedCompareExchangePointer() method itself could be used like a spin-lock. Then one also has to consider acquire/release/fence semantics... Ideas? Thanks!

    Read the article

  • Questions regarding detouring by modifying the virtual table

    - by Elliott Darfink
    I've been practicing detours using the same approach as Microsoft Detours (replace the first five bytes with a jmp and an address). More recently I've been reading about detouring by modifying the virtual table. I would appreciate if someone could shed some light on the subject by mentioning a few pros and cons with this method compared to the one previously mentioned! I'd also like to ask about patched vtables and objects on the stack. Consider the following situation: // Class definition struct Foo { virtual void Call(void) { std::cout << "FooCall\n"; } }; // If it's GCC, 'this' is passed as the first parameter void MyCall(Foo * object) { std::cout << "MyCall\n"; } // In some function Foo * foo = new Foo; // Allocated on the heap Foo foo2; // Created on the stack // Arguments: void ** vtable, uint offset, void * replacement PatchVTable(*reinterpret_cast<void***>(foo), 0, MyCall); // Call the methods foo->Call(); // Outputs: 'MyCall' foo2.Call(); // Outputs: 'FooCall' In this case foo->Call() would end up calling MyCall(Foo * object) whilst foo2.Call() call the original function (i.e Foo::Call(void) method). This is because the compiler will try to decide any virtual calls during compile time if possible (correct me if I'm wrong). Does that mean it does not matter if you patch the virtual table or not, as long as you use objects on the stack (not heap allocated)?

    Read the article

  • Auto scale and rotate images

    - by Dave Jarvis
    Given: two images of the same subject matter; the images have the same resolution, colour depth, and file format; the images differ in size and rotation; and two lists of (x, y) co-ordinates that correlate the images. I would like to know: How do you transform the larger image so that it visually aligns to the second image? (Optional.) What are the minimum number of points needed to get an accurate transformation? (Optional.) How far apart do the points need to be to get an accurate transformation? The transformation would need to rotate, scale, and possibly shear the larger image. Essentially, I want to create (or find) a program that does the following: Input two images (e.g., TIFFs). Click several anchor points on the small image. Click the several corresponding anchor points on the large image. Transform the large image such that it maps to the small image by aligning the anchor points. This would help align pictures of the same stellar object. (For example, a hand-drawn picture from 1855 mapped to a photograph taken by Hubble in 2000.) Many thanks in advance for any algorithms (preferably Java or similar pseudo-code), ideas or links to related open-source software packages.
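
    For the optional parts: a full affine transform (rotation, scale, shear, translation) needs at least three non-collinear point pairs, and if only rotation, scale and translation are allowed, two pairs suffice; well-separated points give a more stable fit. A least-squares sketch in Python/numpy for estimating the 2x3 affine matrix from the clicked anchor points (OpenCV's estimateAffine2D does the same job with outlier rejection):

        import numpy as np

        def estimate_affine(src_pts, dst_pts):
            """Least-squares 2x3 affine map taking src (large image) points to dst (small image)."""
            A, b = [], []
            for (x, y), (u, v) in zip(src_pts, dst_pts):
                A.append([x, y, 1, 0, 0, 0]); b.append(u)
                A.append([0, 0, 0, x, y, 1]); b.append(v)
            params, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
            return params.reshape(2, 3)   # [u, v]^T = M @ [x, y, 1]^T

        # Illustrative anchor points only.
        M = estimate_affine([(0, 0), (100, 0), (0, 100)],
                            [(10, 20), (60, 20), (10, 70)])
        print(M)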

    Read the article

  • problem in html, table class text stretching out.

    - by Andy
    Hey people, I've got a slight problem after weeks of html programming. I've got a large table which I use to construct tabs with; the details don't really matter. On every tab there is a tab title (which is defined as one single cell in a table with a class from a .css) and under it there are other rows and columns for the table. Sample code for the single table with the single cell in it: <table class='tabcontainer_title'><tr><td class='tabcontainer_title'>TEXT</td></tr></table> This table is again positioned in one cell of the table outside it, which has a different class, 'tabcontainer_content'. This is in the CSS: .tabcontainer_title{ background-color : #58af34; background-image : url(); text-align : right; vertical-align : top; margin-top : 0px; margin-right : 0px; margin-bottom : 0px; margin-left : 0px; padding-top : 5px; padding-right : 10px; padding-bottom : 5px; padding-left : 0px; font-size : 14px; font-style : normal; color : #000000; } .tabcontainer_content{ width : 100%; font-weight : bolder; background-color : #58af34; color : #000000; padding : 0px; border-collapse : collapse; } The problem I'm experiencing right now is that if there are only, say, 3 rows in the table, which means there's a lot of empty space instead of those rows, the text in the tab title has a large margin from the top, but I haven't configured any margin to be present. When the table is full of rows, though, the tab title holds no unnecessary empty space above the text. What am I missing here?

    Read the article

  • Django: Determining if a user has voted or not

    - by TheLizardKing
    I have a long list of links that I spit out using the below code - total votes, submitted by, the usual stuff - but I am not 100% sure how to determine if the currently logged in user has voted on a link or not. I know how to do this from within my view, but do I need to alter my view code below or can I make use of the way templates work to determine it? I have read http://stackoverflow.com/questions/1528583/django-vote-up-down-method but I don't quite understand what's going on (and don't need any of the JavaScript). Models (snippet): class Link(models.Model): category = models.ForeignKey(Category, blank=False, default=1) user = models.ForeignKey(User) created = models.DateTimeField(auto_now_add=True) modified = models.DateTimeField(auto_now=True) url = models.URLField(max_length=1024, unique=True, verify_exists=True) name = models.CharField(max_length=512) def __unicode__(self): return u'%s (%s)' % (self.name, self.url) class Vote(models.Model): link = models.ForeignKey(Link) user = models.ForeignKey(User) created = models.DateTimeField(auto_now_add=True) def __unicode__(self): return u'%s vote for %s' % (self.user, self.link) Views (snippet): def hot(request): links = Link.objects.select_related().annotate(votes=Count('vote')).order_by('-created') for link in links: delta_in_hours = (int(datetime.now().strftime("%s")) - int(link.created.strftime("%s"))) / 3600 link.popularity = ((link.votes - 1) / (delta_in_hours + 2)**1.5) if request.user.is_authenticated(): try: link.voted = Vote.objects.get(link=link, user=request.user) except Vote.DoesNotExist: link.voted = None links = sorted(links, key=lambda x: x.popularity, reverse=True) links = paginate(request, links, 15) return direct_to_template( request, template = 'links/link_list.html', extra_context = { 'links': links, }) The above view actually accomplishes what I need, but in what I believe to be a horribly inefficient way. This causes the dreaded N+1 queries; as it stands, that's 33 queries for a page containing just 29 links, while originally I got away with just 4 queries. I would really prefer to do this using Django's ORM or at least .extra(). Any advice?
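
    A sketch of one way to drop the per-link query while staying inside the Django ORM: fetch the ids of every link the current user has voted on in a single extra query, then mark each link in Python. The query count stays constant no matter how many links are shown (the snippet assumes it replaces the per-link try/except inside the loop above):

        if request.user.is_authenticated():
            voted_ids = set(Vote.objects.filter(user=request.user, link__in=links)
                                        .values_list('link_id', flat=True))
        else:
            voted_ids = set()

        for link in links:
            link.voted = link.id in voted_ids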

    Read the article

  • Determining if an unordered vector<T> has all unique elements

    - by Hooked
    Profiling my cpu-bound code has suggested that I spend a long time checking to see if a container contains completely unique elements. Assuming that I have some large container of unsorted elements (with < and = defined), I have two ideas on how this might be done: The first using a set: template <class T> bool is_unique(vector<T> X) { set<T> Y(X.begin(), X.end()); return X.size() == Y.size(); } The second looping over the elements: template <class T> bool is_unique2(vector<T> X) { typename vector<T>::iterator i,j; for(i=X.begin();i!=X.end();++i) { for(j=i+1;j!=X.end();++j) { if(*i == *j) return 0; } } return 1; } I've tested them the best I can, and from what I can gather from reading the documentation about STL, the answer is (as usual), it depends. I think that in the first case, if all the elements are unique it is very quick, but if there is a large degeneracy the operation seems to take O(N^2) time. For the nested iterator approach the opposite seems to be true, it is lightning fast if X[0]==X[1] but takes (understandably) O(N^2) time if all the elements are unique. Is there a better way to do this, perhaps an STL algorithm built for this very purpose? If not, are there any suggestions to eke out a bit more efficiency?
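
    A third option that gets the worst case down to O(N log N) without building a set is to sort a copy and compare adjacent elements; in C++ that is std::sort on a copy followed by std::adjacent_find. The idea, sketched in Python to match the other examples on this page:

        def is_unique(xs):
            ys = sorted(xs)                              # work on a copy, as in the question
            return all(a != b for a, b in zip(ys, ys[1:]))

        print(is_unique([3, 1, 2]))   # True
        print(is_unique([3, 1, 3]))   # False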

    Read the article

  • How can I accelerate the generation of an MD5 checksum within vb.net?

    - by Richard
    I'm working with some very large files residing on P2 (Panasonic) cards. Part of the process we employ is to first generate a checksum of the file we are going to copy, then copy the file, then run a checksum on the file to confirm that it copied OK. The problem is that the files are large (70 GB+) and take a long time to complete. It's an issue since we will eventually be dealing with thousands of these files. I would like to find a faster way to generate the checksum other than using the System.Security.Cryptography.MD5CryptoServiceProvider I don't care if this means using a specialized hardware card, provided it works and is not too ungodly expensive. I would prefer to have a method of encoding that provided some feedback as to how far the process has progressed so I can display it like I do now. The application is written in vb.net. I would prefer to be able to use it as a component, library, or reference within my application, but I'm willing to call an outside application if there is enough improvement in the speed of generating the checksum. Needless to say, the checksum must be consistent and correct. :-) Thank you in advance for your time and efforts, Richard
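
    Much of the time is usually spent just reading the card, but the progress-feedback part is straightforward: hash the file in chunks and report after each chunk (in .NET the incremental calls are HashAlgorithm.TransformBlock and TransformFinalBlock). The general pattern, sketched here in Python for compactness; the chunk size is an arbitrary choice:

        import hashlib, os

        def md5_with_progress(path, chunk_size=8 * 1024 * 1024):
            total = os.path.getsize(path)
            done = 0
            h = hashlib.md5()
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(chunk_size)
                    if not chunk:
                        break
                    h.update(chunk)
                    done += len(chunk)
                    print("\r%5.1f%%" % (100.0 * done / max(total, 1)), end="")
            print()
            return h.hexdigest()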

    Read the article

  • Uniquely identify files/folders in NTFS, even after move/rename

    - by Felix Dombek
    I haven't found a backup (synchronization) program which does what I want so I'm thinking about writing my own. What I have now does the following: It goes through the data in the source and for every file which has its archive bit set OR does not exist in the destination, copies it to the destination, overwriting a possibly existing file. When done, it checks for all files in the destination if it exists in the source, and if it doesn't, deletes it. The problem is that if I move or rename a large folder, it first gets copied to the destination even though it is in principle already there, just has a different path. Then the folder which was already there is deleted afterwards. Apart from the unnecessary copying, I frequently run into space problems because my backup drive isn't large enough to hold the original data twice. Is there a way to programmatically identify such moved/renamed files or folders, i.e. by NTFS ID or physical location on media or something else? Are there solutions to this problem? I do not care about the programming language, but hints for doing this with Python, C++, C#, Java or Prolog are appreciated.
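
    On NTFS every file has a volume-unique file reference number that survives renames and moves within the same volume (though it can be reused after the file is deleted). From Python 3.5 onwards, os.stat() exposes it on Windows as st_ino, with st_dev identifying the volume, so a backup tool can recognise a merely moved or renamed file by comparing its id against an index saved from the previous run. A sketch (paths are illustrative):

        import os, json

        def file_id(path):
            st = os.stat(path)
            return "%d-%d" % (st.st_dev, st.st_ino)

        def index_tree(root):
            # Map each file's stable id to its current relative path.
            ids = {}
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    full = os.path.join(dirpath, name)
                    ids[file_id(full)] = os.path.relpath(full, root)
            return ids

        previous = json.load(open("source_index.json"))   # saved by the last backup run
        current = index_tree(r"D:\data")
        for fid, path in current.items():
            old_path = previous.get(fid)
            if old_path and old_path != path:
                print("moved/renamed:", old_path, "->", path)  # move it in the destination instead of recopying
        json.dump(current, open("source_index.json", "w"))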

    Read the article

  • Are bad data issues that common?

    - by Water Cooler v2
    I've worked for clients that had a large number of distinct, small to mid-sized projects, each interacting with each other via properly defined interfaces to share data, but not reading and writing to the same database. Each had their own separate database, their own cache, their own file servers/system that they had dedicated access to, and so they never caused any problems. One of these clients is a mobile content vendor, so they're lucky in a way that they do not have to face the same problems that everyday business applications do. They can create all those separate compartments where their components happily live in isolation of the others. However, for many business applications, this is not possible. I've worked with a few clients, one of whose applications I am doing the production support for, where there are "bad data issues" on an hourly basis. Yeah, it's that crazy. Some data records from one of the instances (lower than production, of course) would have been run a couple of weeks ago, and caused some other user's data to get corrupted. And then, a data script will have to be written to fix this issue. And I've seen this happening so much with this client that I have to ask. I've seen this happening at a moderate rate with other clients, but this one just seems to be out of order. If you're working with business applications that share a large amount of data by reading and writing to/from the same database, are "bad data issues" that common in your environment?

    Read the article

  • c++ struct size

    - by kiokko89
    struct CExample { int a; }; int main(int argc, char* argv[]) { CExample ce; CExample ce2; cout << "Size:" << sizeof(ce)<< " Address: "<< &ce<< endl; cout << "Size:" << sizeof(ce2)<< " Address: "<< &ce2 << endl; CExample ceArr[2]; cout << "Size:" << sizeof(ceArr[0])<< " Address: "<<&ceArr[0]<<endl; cout << "Size:" << sizeof(ceArr[1])<< " Address: "<<&ceArr[1]<<endl; return 0; } Excuse me, I'm just a beginner, but I'd like to know why, with this code, there is a difference of 12 bytes between the addresses of the first two objects (ce and ce2) (I thought about data alignment), but only a difference of 4 bytes between the two objects in the array. Sorry for my bad English...

    Read the article

  • Returning XML natively in a .NET (C#) webservice?

    - by James McMahon
    I realize that SOAP webservices in .NET return an XML representation of whatever object the web method returns, but if I want to return data formatted as XML, what is the best object to store it in? I am using the answer to this question to write my XML; here is the code: XmlWriter writer = XmlWriter.Create(pathToOutput); writer.WriteStartDocument(); writer.WriteStartElement("People"); writer.WriteStartElement("Person"); writer.WriteAttributeString("Name", "Nick"); writer.WriteEndElement(); writer.WriteStartElement("Person"); writer.WriteStartAttribute("Name"); writer.WriteValue("Nick"); writer.WriteEndAttribute(); writer.WriteEndElement(); writer.WriteEndElement(); writer.WriteEndDocument(); writer.Flush(); Now I can return this output as a String to my calling webmethod, but it shows up as <string> XML HERE </string>; is there any way to just return the full XML? Please, in your answer, give an example of how to use said object with either XmlWriter or another internal object (if you consider XmlWriter to be a poor choice). The System.Xml package (namespace) has many objects, but I haven't been able to uncover decent documentation on how to use the objects together, or what to use for what situations.

    Read the article
