Search Results

Search found 5153 results on 207 pages for 'unique ptr'.

Page 164/207 | < Previous Page | 160 161 162 163 164 165 166 167 168 169 170 171  | Next Page >

  • Fast JSON serialization (and comparison with Pickle) for cluster computing in Python?

    - by user248237
    I have a set of data points, each described by a dictionary. The processing of each data point is independent, and I submit each one as a separate job to a cluster. Each data point has a unique name, and my cluster submission wrapper simply calls a script that takes a data point's name and a file describing all the data points. That script then accesses the data point from the file and performs the computation. Since each job has to load the set of all points only to retrieve the point to be run, I wanted to optimize this step by serializing the file describing the set of points into an easily retrievable format. I tried using jsonpickle, using the following method, to serialize a dictionary describing all the data points to file:

        def json_serialize(obj, filename, use_jsonpickle=True):
            f = open(filename, 'w')
            if use_jsonpickle:
                import jsonpickle
                json_obj = jsonpickle.encode(obj)
                f.write(json_obj)
            else:
                simplejson.dump(obj, f, indent=1)
            f.close()

    The dictionary contains very simple objects (lists, strings, floats, etc.) and has a total of 54,000 keys. The JSON file is ~20 megabytes in size. It takes ~20 seconds to load this file into memory, which seems very slow to me. I switched to using pickle with the exact same object and found that it generates a file that's about 7.8 megabytes in size and can be loaded in ~1-2 seconds. This is a significant improvement, but it still seems like loading a small object (less than 100,000 entries) should be faster. Aside from that, pickle is not human readable, which was the big advantage of JSON for me. Is there a way to use JSON to get similar or better speedups? If not, do you have other ideas on structuring this? (Is the right solution simply to "slice" the file describing each event into a separate file and pass that on to the script that runs a data point in a cluster job? That seems like it could lead to a proliferation of files.) Thanks.
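
    A quick way to see where the time goes is to benchmark the plain json loader against pickle with a binary protocol on the same kind of dictionary. The snippet below is only a minimal benchmarking sketch, not code from the question; the 54,000-key dictionary is synthesized for illustration, and plain json (without the jsonpickle layer) is usually noticeably faster than jsonpickle.

        import json
        import pickle
        import time

        # Stand-in for the ~54,000-key dictionary of simple values.
        data = dict(("point_%d" % i, {"name": "point_%d" % i, "values": [1.0, 2.0, 3.0]})
                    for i in range(54000))

        def timed(label, dump, load, path):
            start = time.time()
            dump(path)
            wrote = time.time() - start
            start = time.time()
            load(path)
            read = time.time() - start
            print("%s: write %.2fs, read %.2fs" % (label, wrote, read))

        # Plain json, no jsonpickle layer.
        timed("json",
              lambda p: json.dump(data, open(p, "w")),
              lambda p: json.load(open(p)),
              "points.json")

        # pickle with the highest protocol writes a compact binary file.
        timed("pickle",
              lambda p: pickle.dump(data, open(p, "wb"), pickle.HIGHEST_PROTOCOL),
              lambda p: pickle.load(open(p, "rb")),
              "points.pkl")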

    Read the article

  • Python ctypes .dll functions (PARI library)

    - by silinter
    Alright, so a couple of days ago I decided to try to write a primitive wrapper for the PARI library. Ever since then I've been playing with the ctypes library, loading the DLL and accessing the functions it contains using code similar to the following:

        from ctypes import *
        libcyg = CDLL("<path>/cygwin1.dll")   # It needs cygwin to be loaded. Not sure why.
        pari = CDLL("<path>/libpari-gmp-2.4.dll")
        print pari.fibo   # fibonacci function
        # prints something like "<_FuncPtr object at 0x00BA5828>"

    So the functions are there and they can potentially be accessed, but I always receive an access violation no matter what I try. For example:

        pari.fibo(5)                    # access violation
        pari.fibo(c_int(5))             # access violation
        pari.fibo.argtypes = [c_long]   # setting arguments manually
        pari.fibo.restype = long        # set the return type
        pari.fibo(byref(c_int(5)))      # access violation reading 0x04 consistently

    and any variation on that, including setting argtypes to receive pointers. The PARI .dll is written in C and the fibonacci function's signature within the library is GEN fibo(long x) (docs at http://pari.math.u-bordeaux.fr/dochtml/html/Arithmetic_functions.html#fibonacci; I need more rep, it seems). Could it be the return type that's causing these errors, as it is not a standard int or long but a GEN type, which is unique to the PARI library? Any help would be appreciated. If anyone is able to successfully load the library and use ANY function from within Python, please tell; I've been at this for hours now.
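
    Two things commonly bite ctypes wrappers around PARI: the library's runtime has to be initialized before any GEN-returning call, and GEN is a pointer type, so leaving restype at the default C int truncates it. The sketch below is an unverified guess at how that might look; pari_init's parameters and whether this particular DLL exports fibo directly are assumptions, not facts taken from the question.

        from ctypes import CDLL, c_long, c_void_p

        # Hypothetical path -- substitute the real location of the DLL.
        pari = CDLL("libpari-gmp-2.4.dll")

        # PARI's stack and prime table must be set up before calling into it;
        # pari_init(stack_size_in_bytes, max_prime) is the usual entry point.
        pari.pari_init.argtypes = [c_long, c_long]
        pari.pari_init(4000000, 500000)

        # GEN is a pointer, so hand it back as an opaque pointer instead of an int.
        pari.fibo.argtypes = [c_long]
        pari.fibo.restype = c_void_p

        result_gen = pari.fibo(5)   # opaque GEN; convert with PARI's own routines
        print(result_gen)           # (e.g. GENtostr) if a readable value is needed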

    Read the article

  • Django Cannot set values on a ManyToManyField which specifies an intermediary model

    - by dana
    I am using an m2m field with a through table, and when I was trying to save, my error was: "Cannot set values on a ManyToManyField which specifies an intermediary model". So I modified my code so that when I save the form, I also insert data into the 'through' table. But now I'm having another error. (I've marked the lines where I think I am wrong.) I have in models.py:

        class Classroom(models.Model):
            user = models.ForeignKey(User, related_name='classroom_creator')
            classname = models.CharField(max_length=140, unique=True)
            date = models.DateTimeField(auto_now=True)
            open_class = models.BooleanField(default=True)
            members = models.ManyToManyField(User, related_name="list of invited members", through='Membership')

        class Membership(models.Model):
            accept = models.BooleanField(User)
            date = models.DateTimeField(auto_now=True)
            classroom = models.ForeignKey(Classroom, related_name='classroom_membership')
            member = models.ForeignKey(User, related_name='user_membership')

    and in the view:

        def save_classroom(request):
            if request.method == 'POST':
                form = ClassroomForm(request.POST, request.FILES, user=request.user)
                classroom_instance = Classroom    # <-- suspect line
                member_instance = Membership      # <-- suspect line
                if form.is_valid():
                    new_obj = form.save(commit=False)
                    new_obj.user = request.user
                    r = Relations.objects.filter(initiated_by=request.user)
                    membership = Membership.objects.create(classroom=classroom_instance,    # <-- suspect line
                                                           member=member_instance,
                                                           date=datetime.datetime.now())
                    new_obj.save()
                    form.save_m2m()
                    return HttpResponseRedirect('/classroom/classroom_view/{{user}}/')
            else:
                form = ClassroomForm(user=request.user)
            return render_to_response('classroom/classroom_form.html', {'form': form}, context_instance=RequestContext(request))

    but I don't seem to initialise classroom_instance and member_instance correctly. My error is: Cannot assign "": "Membership.classroom" must be a "Classroom" instance. Thanks!
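
    The error message is pointing at the fact that classroom_instance and member_instance are the model classes themselves, not saved instances. A minimal sketch of the usual pattern follows, assuming the classroom should be the object the form just saved; which User should become the member (the creator, or someone taken from Relations) is not clear from the question, so request.user is only a placeholder:

        # Inside the form.is_valid() branch -- a sketch, not the questioner's exact code.
        new_obj = form.save(commit=False)
        new_obj.user = request.user
        new_obj.save()                  # save the Classroom first so it has a primary key

        # Create the through-table row with real instances, not classes.
        Membership.objects.create(
            classroom=new_obj,          # the Classroom instance just saved
            member=request.user,        # placeholder: whichever User is being invited
        )
        # date has auto_now=True on the model, so it does not need to be passed in.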

    Read the article

  • Beginner SQL question: querying gold and silver tag badges in Stack Exchange Data Explorer

    - by polygenelubricants
    I'm using the Stack Exchange Data Explorer to learn SQL, but I think the fundamentals of the question are applicable to other databases. I'm trying to query the Badges table, which according to Stexdex (that's what I'm going to call it from now on) has the following schema:

        Badges
            Id
            UserId
            Name
            Date

    This works well for badges like [Epic] and [Legendary], which have unique names, but the silver and gold tag-specific badges seem to be mixed in together by having the exact same name. Here's an example query I wrote for the [mysql] tag:

        SELECT UserId as [User Link], Date
        FROM Badges
        WHERE Name = 'mysql'
        ORDER BY Date ASC

    The (slightly annotated) output, as seen on Stexdex:

        User Link         Date
        ---------------   -------------------
        -- all for silver except where noted
        Bill Karwin       2009-02-20 11:00:25
        Quassnoi          2009-06-01 10:00:16
        Greg              2009-10-22 10:00:25
        Quassnoi          2009-10-31 10:00:24   -- for gold
        Bill Karwin       2009-11-23 11:00:30   -- for gold
        cletus            2010-01-01 11:00:23
        OMG Ponies        2010-01-03 11:00:48
        Pascal MARTIN     2010-02-17 11:00:29
        Mark Byers        2010-04-07 10:00:35
        Daniel Vassallo   2010-05-14 10:00:38

    This is consistent with the current list of silver and gold earners at the moment of this writing, but to speak in more timeless terms: as of the end of May 2010 only 2 users have earned the gold [mysql] tag, Quassnoi and Bill Karwin, as evidenced in the above result by their names being the only ones that appear twice. So this is the way I understand it: the first time an Id appears (in chronological order) is for the silver badge; the second time is for the gold. Now, the above result mixes the silver and gold entries together. My questions are:

    1. Is this a typical design, or are there much friendlier schemas/normalizations/whatever you call it?
    2. In the current design, how would you query the silver and gold badges separately? GROUP BY Id and picking the min/max or first/second by the Date somehow?
    3. How can you write a query that lists all the silver badges first and then all the gold badges? Imagine also that the "real" query may be more complicated, i.e. not just listing by date. How would you write it so that it doesn't have too much repetition between the silver and gold subqueries? Is it perhaps more typical to do two totally separate queries instead?
    4. What is this idiom called? A row "partitioning" query to put them into "buckets" or something?

    Read the article

  • CharField values disappearing after save (readonly field)

    - by jamida
    I'm implementing a simple "grade book" application where the teacher should be able to update the grades without being allowed to change the students' names (at least not on the update-grade page). To do this I'm using one of the read-only tricks, the simplest one. The problem is that after the SUBMIT, the view is re-displayed with 'blank' values for the students. I'd like the students' names to re-appear. Below is the simplest example that exhibits this problem. (This is poor DB design, I know; I've extracted just the relevant parts of the code to showcase the problem. In the real example, student is in its own table, but the problem still exists there.)

    models.py:

        class Grade1(models.Model):
            student = models.CharField(max_length=50, unique=True)
            finalGrade = models.CharField(max_length=3)

        class Grade1OForm(ModelForm):
            student = forms.CharField(max_length=50, required=False)

            def __init__(self, *args, **kwargs):
                super(Grade1OForm, self).__init__(*args, **kwargs)
                instance = getattr(self, 'instance', None)
                if instance and instance.id:
                    self.fields['student'].widget.attrs['readonly'] = True
                    self.fields['student'].widget.attrs['disabled'] = 'disabled'

            def clean_student(self):
                instance = getattr(self, 'instance', None)
                if instance:
                    return instance.student
                else:
                    return self.cleaned_data.get('student', None)

            class Meta:
                model = Grade1

    views.py:

        from django.forms.models import modelformset_factory

        def modifyAllGrades1(request):
            gradeFormSetFactory = modelformset_factory(Grade1, form=Grade1OForm, extra=0)
            studentQueryset = Grade1.objects.all()
            if request.method == 'POST':
                myGradeFormSet = gradeFormSetFactory(request.POST, queryset=studentQueryset)
                if myGradeFormSet.is_valid():
                    myGradeFormSet.save()
                    info = "successfully modified"
            else:
                myGradeFormSet = gradeFormSetFactory(queryset=studentQueryset)
            return render_to_response('grades/modifyAllGrades.html', locals())

    template:

        <p>{{ info }}</p>
        <form method="POST" action="">
          <table>
            {{ myGradeFormSet.management_form }}
            {% for myform in myGradeFormSet.forms %}
              {# myform.as_table #}
              <tr>
                {% for field in myform %}
                  <td> {{ field }} {{ field.errors }} </td>
                {% endfor %}
              </tr>
            {% endfor %}
          </table>
          <input type="submit" value="Submit">
        </form>
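
    A likely explanation, offered as a hedged note rather than a confirmed diagnosis: browsers do not submit the values of disabled inputs, so on POST the bound formset has no student value to render, even though clean_student() restores it for saving. One simple workaround is to rebuild the formset from the database after a successful save, as in this sketch based on the view above:

        if request.method == 'POST':
            myGradeFormSet = gradeFormSetFactory(request.POST, queryset=studentQueryset)
            if myGradeFormSet.is_valid():
                myGradeFormSet.save()
                info = "successfully modified"
                # Re-bind to fresh data instead of re-rendering the POST-bound formset,
                # so the read-only student names are populated again.
                myGradeFormSet = gradeFormSetFactory(queryset=Grade1.objects.all())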

    Read the article

  • How do I implement repository pattern and unit of work when dealing with multiple data stores?

    - by Jason
    I have a unique situation where I am building a DDD-based system that needs to use both Active Directory and a SQL database for persistence. Initially this wasn't a problem, because our design was set up with a unit of work that looked like this:

        public interface IUnitOfWork
        {
            void BeginTransaction();
            void Commit();
        }

    and our repositories looked like this:

        public interface IRepository<T>
        {
            T GetByID();
            void Save(T entity);
            void Delete(T entity);
        }

    In this setup, our load and save would handle the mapping between both data stores because we wrote it ourselves. The unit of work would handle transactions and would contain the LINQ to SQL data context that the repositories would use for persistence. The Active Directory part was handled by a domain service implemented in infrastructure and consumed by the repositories in each Save() method. Save() was responsible for interacting with the data context to do all the database operations. Now we are trying to adapt it to Entity Framework and take advantage of POCO. Ideally we would not need the Save() method, because the domain objects are being tracked by the object context; we would just need to add a Save() method on the unit of work to have the object context save the changes, and a way to register new objects with the context. The new proposed design looks more like this:

        public interface IUnitOfWork
        {
            void BeginTransaction();
            void Save();
            void Commit();
        }

        public interface IRepository<T>
        {
            T GetByID();
            void Add(T entity);
            void Delete(T entity);
        }

    This solves the data access problem with Entity Framework, but it does not solve the problem with our Active Directory integration. Before, that logic lived in the Save() method on the repository, but now it has no home. The unit of work knows nothing other than the Entity Framework data context. Where should this logic go? I argue this design only works if you have just one data store using Entity Framework. Any ideas how to best approach this issue? Where should I put this logic?

    Read the article

  • Thumbnail fade-in/fade-out: specific div fade issues

    - by Omikron
    I am using this code to hide and show a div based on which thumbnail you roll over:

        $(document).ready(function(){
            $('div.infodiv').hide();
            $(".website_thumbs a").hover(
                function(){
                    var name = $(this).attr("name");
                    $(".infodiv").stop();
                    $("."+name).fadeIn();
                },
                function(){
                    var name = $(this).attr("name");
                    $("."+name).fadeTo(7000,1).fadeOut();
                });
        });

    The script gets the name attribute from the thumbnail and displays the div with the corresponding class. Each div shares the .infodiv class but also has a class unique to each thumbnail. The functionality is basically where I want it, but when you scroll over the thumbnails quickly, some of the divs get stuck in a kind of half faded-in state and stop working unless I roll over them once more; then they slowly fade in and are usable again. I am a bit new to jQuery and would appreciate any help.

    Read the article

  • RESTfully Nesting Resource Routes with Single Identifiers

    - by Craig Walker
    In my Rails app I have a fairly standard has_many relationship between two entities. A Foo has zero or more Bars; a Bar belongs to exactly one Foo. Both Foo and Bar are identified by a single integer ID value. These values are unique across all of their respective instances. Bar is existence-dependent on Foo: it makes no sense to have a Bar without a Foo. There are two ways to RESTfully reference instances of these classes. Given a Foo.id of "100" and a Bar.id of "200":

    1. Reference each Foo and Bar through their own "top-level" URL routes, like so:

        /foo/100
        /bar/200

    2. Reference Bar as a nested resource through its instance of Foo:

        /foo/100
        /foo/100/bar/200

    I like the nested routes in #2 as they more closely represent the actual dependency relationship between the entities. However, it does seem to involve a lot of extra work for very little gain. Assuming that I know about a particular Bar, I don't need to be told about a particular Foo; I can derive that from the Bar itself. In fact, I probably should be validating the routed Foo everywhere I go (so that you couldn't do /foo/150/bar/200, assuming Bar 200 is not assigned to Foo 150). Ultimately, I don't see what this brings me. So, are there any other arguments for or against these two routing schemes?

    Read the article

  • Google's OAuth for Installed Apps vs. OAuth for Web Apps

    - by burgerguy
    So I'm having trouble understanding something... If you do OAuth for Web Apps, you register your site with a callback URL and get a unique consumer secret key. But once you've obtained an OAuth for Web Apps token, you don't have to generate OAuth calls to the Google server from your registered domain. I regularly use my key and token from scripts running via an Apache server at localhost on my laptop, and Google never says "you're not sending this request from the registered domain." It just sends me the data. Now, as I understand it, if you do OAuth for Installed Apps, you use "anonymous" instead of a secret key you got from Google. I've been thinking of just using the OAuth for Web Apps auth method, then passing that token to an installed app that has my secret code embedded in its innards. The worry is that the code could be discovered by bad people. But which is more secure: making them work for the secret code, or letting them default to anonymous? What really goes bad if the "secret" is discovered, when the alternative is using "anonymous" as the secret?

    Read the article

  • How to query collections in NHibernate

    - by user305813
    Hi, I have a class:

        public class User
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
            public virtual IDictionary<string, string> Attributes { get; set; }
        }

    and a mapping file:

        <class name="User" table="Users">
          <id name="Id">
            <generator class="hilo"/>
          </id>
          <property name="Name"/>
          <map name="Attributes" table="UserAttributes">
            <key column="UserId"/>
            <index column="AttributeName" type="System.String"/>
            <element column="Attributevalue" type="System.String"/>
          </map>
        </class>

    So now I can add many attributes and values to a User. How can I query those attributes so that I can, for example, get all the users where the attribute name is "Age" and the attribute value is "20"? I don't want to do this in a foreach, because I may have millions of users, each having their own unique attributes. Please help.

    Read the article

  • How to group data changes by operation with MySQL triggers

    - by Jan-Henk
    I am using triggers in MySQL to log changes to the data. These changes are recorded on a row level. I can now insert an entry in my log table for each row that is changed. However, I also need to record the operation to which the changes belong. For example, a delete operation like "DELETE * FROM table WHERE type=x" can delete multiple rows. With the trigger I can insert an entry for each deleted row into the log table, but I would like to also provide a unique identifier for the operation as a whole, so that the log table looks something like:

        log_id   operation_id   tablename   fieldname   oldvalue   newvalue
        1        1              table       id          1          null
        2        1              table       type        a          null
        3        1              table       id          2          null
        4        1              table       type        a          null
        5        2              table       id          3          null
        6        2              table       type        b          null
        7        2              table       id          4          null
        8        2              table       type        b          null

    Is there a way in MySQL to identify the higher-level operation to which the row changes belong? Or is this only possible by means of application-level code? In the future it would also be nice to be able to record the transaction to which an operation belongs. Another question is whether it is possible to capture the actual SQL query, besides using the query log. I don't think so myself, but maybe I am missing something. It is of course possible to capture these at the application level, but the goal is to keep intrusions into the application-level code as minimal as possible. When this is not possible with MySQL, how is this handled in other database systems? For the current project it is not an option to use something other than MySQL, but it would be nice to know for future projects.
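
    One approach that keeps the intrusion into application code small is to set a user-defined session variable just before each statement and let the triggers copy it into operation_id. The sketch below is illustrative only: the table names are invented, the MySQLdb driver is an assumption, and UUID() yields a string where the example log table shows integers (a counter table could supply integers instead). The key point is that a trigger fired on the same connection can read @operation_id.

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="appdb")
        cur = conn.cursor()

        # Tag the upcoming statement with a fresh operation identifier; triggers on
        # this connection can read @operation_id and write it into the log table.
        cur.execute("SET @operation_id = UUID()")

        # The actual data change; an AFTER DELETE trigger defined separately would
        # insert one log row per deleted row, including the value of @operation_id.
        cur.execute("DELETE FROM some_table WHERE type = %s", ("x",))
        conn.commit()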

    Read the article

  • How can I filter these Django records?

    - by mipadi
    I have a set of Django models as shown in the following diagram (the names of the reverse relationships are shown in the yellow bubbles). In each relationship, a Person may have 0 or more of the items. Additionally, the slug field is (unfortunately) not unique; multiple Person records may have the same slug field. Essentially these records are duplicates. I want to obtain a list of all records that meet the following criteria: all duplicate records (that is, having the same slug) with at least one Entry OR at least one Audio OR at least one Episode OR at least one Article. So far, I have the following query:

        Person.objects.values('slug').annotate(num_records=Count('slug')).filter(num_records__gt=1)

    This groups all records by slug, then adds a num_records attribute that says how many records have that slug, but the additional filtering is not performed (and I don't even know if this would work right anyway, since, given a set of duplicate records, one may have, e.g., an Entry and the other may have an Article). In a nutshell, I want to find all duplicate records and collapse them, along with their associated models, into one record. What's the best way to do this with Django?
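
    One way to express the "at least one related object" condition, offered only as a sketch: the reverse-relation names entry, audio, episode, and article are guesses based on the diagram description, and collapsing the duplicates afterwards still has to happen in Python or a follow-up query.

        from django.db.models import Count, Q

        # Slugs that occur more than once -- the duplicate groups.
        dup_slugs = (Person.objects.values('slug')
                     .annotate(num_records=Count('slug'))
                     .filter(num_records__gt=1)
                     .values_list('slug', flat=True))

        # Among those, people with at least one related Entry, Audio, Episode, or
        # Article (reverse-relation names are assumptions taken from the diagram).
        candidates = (Person.objects.filter(slug__in=list(dup_slugs))
                      .filter(Q(entry__isnull=False) |
                              Q(audio__isnull=False) |
                              Q(episode__isnull=False) |
                              Q(article__isnull=False))
                      .distinct())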

    Read the article

  • Delphi Application using COMMIT and ROLLBACK for Multiple SQL Updates

    - by Matt
    Is it possible to use the SQL BEGIN TRANSACTION, COMMIT TRANSACTION, and ROLLBACK TRANSACTION statements when embedding SQL queries into an application that makes multiple calls to SQL for table updates? For example, I have the following code:

        Q.SQL.ADD(<UPDATE A RECORD>);
        Q.ExecSQL;
        Q.Close;
        Q.SQL.Clear;

        Q.SQL.ADD(<Select Some Data>);
        Q.Open;
        // Set some variables
        Q.Close;
        Q.SQL.Clear;

        Q.SQL.ADD(<UPDATE A RECORD>);
        Q.ExecSQL;

    What I would like to do is roll back the first update if the second update fails. If I set a unique notation for the BEGIN, COMMIT, and ROLLBACK so as to specify what is being committed or rolled back, is it feasible? I.e. before the first update specify BEGIN TRANSACTION_A, then after the last update specify COMMIT TRANSACTION_A. I hope that makes sense. If I was doing this in a SQL stored procedure I would be able to specify this at the start and end of the procedure, but I have had to break the code down into manageable chunks due to process blocks and deadlocks on a heavily loaded SQL Server.

    Read the article

  • Querying a container with LINQ + group by?

    - by Prix
        public class ItemList
        {
            public int GuID { get; set; }
            public int ItemID { get; set; }
            public string Name { get; set; }
            public entityType Status { get; set; }

            public class Waypoint
            {
                public int Zone { get; set; }
                public int SubID { get; set; }
                public int Heading { get; set; }
                public float PosX { get; set; }
                public float PosY { get; set; }
                public float PosZ { get; set; }
            }

            public List<Waypoint> Routes = new List<Waypoint>();
        }

    I have a list of items using the above class, and now I need to group it by ItemID and join the first entry of Routes of each equal ItemID. So for example, let's say on my list I have:

        GUID   ItemID   ListOfRoutes
        1      23       first entry only
        2      23       first entry only
        3      23       first entry only
        4      23       first entry only
        5      23       first entry only
        6      23       first entry only
        7      23       first entry only

    That means I have to group entries 1 to 7 as one item with all the Routes entries. So I would have one ItemID 23 with 7 Routes on it, where those routes are the first element of that given GUID's Routes list. My question is whether it is possible using LINQ to make a statement that does something like this:

        var query = from ItemList entry in myList
                    where status.Contains(entry.Status)
                    group entry by entry.ItemID into result
                    select new
                    {
                        items = new { ID = entry.ItemID, Name = entry.Name },
                        routes = from ItemList m in entry
                                 group m.Routes.FirstOrDefault() by n.NpcID into m2
                    };

    So basically I would have a list of unique IDs' information with an inner list of the first entry of each GUID's route that had the same ItemID.

    Read the article

  • Database Design Question regarding duplicate information

    - by galford13x
    I have a database that contains a history of product sales. For example, the following table:

        CREATE TABLE SalesHistoryTable (
            OrderID,    -- Order number, unique to all orders
            ProductID,  -- Product ID; can be used as a key to look up product info in another table
            Price,      -- Price of the product per unit at the time of the order
            Quantity,   -- Quantity of the product for the order
            Total,      -- Total cost of the order for the product (Price * Quantity)
            Date,       -- Date of the order
            StoreID,    -- The store that created the order
            PRIMARY KEY(OrderID));

    The table will eventually have millions of transactions. From this, profiles can be created for products in different geographical regions (based on the StoreID). Creating these profiles can be very time-consuming as a database query. For example:

        SELECT ProductID, StoreID, SUM(Total) AS Total, SUM(Quantity) QTY, SUM(Total)/SUM(Quantity) AS AvgPrice
        FROM SalesHistoryTable
        GROUP BY ProductID, StoreID;

    The above query could be used to get the information based on products for any particular store. You could then determine which store has sold the most, which has made the most money, and which on average sells for the most/least. This would be very costly to run as a normal query at any time. What are some design decisions that would allow these types of queries to run faster, assuming storage size isn't an issue? For example, I could create another table with duplicate information:

        StoreID (Key), ProductID, TotalCost, QTY, AvgPrice

    and provide a trigger so that when a new order is received, the entry for that store is updated in the new table. The cost of that update is almost nothing. What should be considered in the above scenario?

    Read the article

  • How do you go about finding out whether an idea you've had has already been patented?

    - by Iain Fraser
    I have an idea for image copy-protection that I'm in the process of coding up and plan on selling to one of my clients who sells images online. If successful, I think there would be a lot of people in a similar situation to my client who would also be interested in the code. I think this is a fairly unique idea that could be packaged into a saleable product, but if I did do this, I wouldn't want some big corporation descending on me with their lawyers after all my hard work. So before I put too much work into this, I'd really like to know how I'd go about finding out whether this idea has already been patented, whether I'd get in trouble if I sold my product, and whether it would be worthwhile patenting the idea myself. Although I find the idea of software patenting abhorrent, it would be more to protect myself from the usual suspects than to stop fellow developers from using the idea (if it is in fact a worthwhile one). I live in Australia, so an idea of who to go and see and a ballpark figure of how much money I'd be looking at having to pay would be fantastic (in orders of magnitude: 100s, 1000s, 10s of thousands of dollars, etc). Cheers, Iain

    Read the article

  • Do programmers need a union?

    - by James A. Rosen
    In light of the acrid responses to the intellectual property clause discussed in my previous question, I have to ask: why don't we have a programmers' union? There are many issues we face as employees, and we have very little ability to organize and negotiate. Could we band together with the writers', directors', or musicians' guilds, or are our needs unique? Has anyone ever tried to start one? If so, why did it fail? (Or, alternatively, why have I never heard of it, despite its success?)

    Later: Keith has my idea basically right. I would also imagine the union being involved in many other topics, including:

    - legal liability for others' use/misuse of our work, especially unintended uses
    - evaluating the quality of computer science and software engineering higher-education programs -- unlike many other engineering disciplines, we are not required to be certified on receiving our Bachelor's degrees
    - evangelism and outreach -- especially to elementary school students
    - certification -- not doing it ourselves, but working with companies like ISC(2) and others to make certifications meaningful and useful
    - continuing education -- similar to the previous point
    - conferences -- maintain a go-to list of organizers and other resources our members can use

    I would see it less as a traditional trade union, with little emphasis on:

    - pay -- we tend to command fairly good salaries
    - outsourcing and free trade -- most of us tend to be pretty free-market oriented
    - working conditions -- we're the only industry where Aeron chairs are considered anything like "standard"

    Read the article

  • Delphi, PGDac vs Zeos, Fetch, Lookup?

    - by durumdara
    Hi! I used Zeos to test one thing: does ZTable use fetch techniques or not? In the future we may migrate our lesser system to PGSQL, and it currently uses "Table" components (as in the BDE, but with an SQL-like server). These tables use real cursors, a "window" of N records, so lookup is very fast, because the Locate/Lookup is started on the server and only these N records are refreshed, no matter how many records are in the lookup table. PGSQL uses fetch techniques as far as I know, and I tested it with a table (id int, name varchar(100)) and 1 million records. (I also tried this with MySQL.) The adapter is Zeos. The columns below are: ID, seconds to find, allocated memory in bytes on the client.

        MySQL
        500000    2,761   113 196 344
        1000000   3,214   225 471 232
        313800    0,437   225 471 232
        328066    0,468   225 471 232
        276374    0,390   225 471 232
        905984    1,264   225 471 232
        260253    0,359   225 471 232

        PGSQL
        500000    3,042   113 188 184
        1000000   3,744   225 463 064
        313800    0,436   225 463 064
        328066    0,452   225 463 064
        276374    0,375   225 463 064
        905984    1,295   225 463 064
        260253    0,359   225 463 064
        142023    0,203   225 463 064

    As you can see, the records are fetched locally; this causes the 225 MB memory usage, and searches are a little slow, depending on where the record we must find is. I want to ask a few more things:

    a.) Does PgDAC have some technique so we can use lookups without paying for the fetch in memory and seconds?
    b.) Or can the PG ODBC driver help with this problem via ADO? (As I know, ADO can use server-side cursors.)
    c.) Does anybody have experience with lookup tables and performance? Is this a critical question or not (including the client memory usage)?
    d.) If there is no chance to avoid fetch hell with lookups, what can we do? Server-side joins, and custom code for lookup-field changing without a real lookup?

    Thanks for your help: dd

    Read the article

  • SQL GUID Vs Integer

    - by Dal
    Hi, I have recently started a new job and noticed that all the SQL tables use the GUID data type for the primary key. In my previous job we used integers (auto-increment) for the primary key, and it was a lot easier to work with, in my opinion. For example, say you had two related tables, Product and ProductType: I could easily cross-check the 'ProductTypeID' column of both tables for a particular row to quickly map the data in my head, because it's easy to hold the number (2, 4, 45, etc.) as opposed to (E75B92A3-3299-4407-A913-C5CA196B3CAB). The extra frustration comes from me wanting to understand how the tables are related; sadly there is no database diagram :( A lot of people say that GUIDs are better because you can define the unique identifier in your C# code, for example using NewID(), without requiring SQL Server to do it -- this also allows you to know provisionally what the ID will be... but I've seen that it is possible to still retrieve the 'next auto-incremented integer' too. A DBA contractor reported that our queries could be up to 30% faster if we used the integer type instead of GUIDs... Why does the GUID data type exist, and what advantages does it really provide? Even if it's a choice made by some professionals, there must be some good reasons why it's implemented.

    Read the article

  • What would be the time complexity of counting the number of all structurally different binary trees?

    - by ktslwy
    Using the method presented here: http://cslibrary.stanford.edu/110/BinaryTrees.html#java (12. countTrees() Solution, Java):

        /**
         For the key values 1...numKeys, how many structurally unique
         binary search trees are possible that store those keys?
         Strategy: consider that each value could be the root.
         Recursively find the size of the left and right subtrees.
        */
        public static int countTrees(int numKeys) {
            if (numKeys <= 1) {
                return(1);
            }
            else {
                // there will be one value at the root, with whatever remains
                // on the left and right each forming their own subtrees.
                // Iterate through all the values that could be the root...
                int sum = 0;
                int left, right, root;
                for (root=1; root<=numKeys; root++) {
                    left = countTrees(root-1);
                    right = countTrees(numKeys - root);
                    // number of possible trees with this root == left*right
                    sum += left*right;
                }
                return(sum);
            }
        }

    What would be the time complexity of this? I have a sense that it might be n(n-1)(n-2)...1, i.e. n!
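
    For context: the values this recursion returns are the Catalan numbers, and without memoization the work grows roughly like those numbers themselves, i.e. exponentially (on the order of 4^n up to polynomial factors) rather than n!. As a side illustration only (in Python rather than the Java above), memoizing the same recurrence brings the cost down to O(n^2):

        from functools import lru_cache

        # Memoized translation of countTrees; the results are the Catalan numbers.
        @lru_cache(maxsize=None)
        def count_trees(num_keys):
            if num_keys <= 1:
                return 1
            total = 0
            for root in range(1, num_keys + 1):
                total += count_trees(root - 1) * count_trees(num_keys - root)
            return total

        print([count_trees(n) for n in range(8)])   # [1, 1, 2, 5, 14, 42, 132, 429]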

    Read the article

  • How to delete duplicate vectors within a multidimensional vector?

    - by David
    I have a vector of vectors:

        vector< vector<int> > BigVec;

    It contains an arbitrary number of vectors, each of an arbitrary size. I want to delete not the duplicate elements of each vector, but any vectors that are exactly the same as another. I don't need to preserve the order of the vectors, so I can sort, etc. It should be a really simple problem to solve, but I'm new to this. My (not-working) best effort:

        for (int i = 0; i < BigVec.size(); i++) {
            for (int j = 1; j < BigVec.size(); j++) {
                if (BigVec[i][0] == BigVec[j][i]);
                {
                    BigVec.erase(BigVec.begin() + j);
                    i = 0;   // because I get the impression deleting a
                    j = 1;   // vector messes up a simple iteration through
                }
            }
        }

    I think there might be a solution using std::unique(), but I can't get that to work either.

    Read the article

  • Random String Generator creates same string on multiple calls

    - by rockinthesixstring
    Hi there. I've built a random string generator, but I'm having a problem whereby if I call the function multiple times, say in a Page_Load method, the function returns the same string twice. Here's the code:

        ''' <summary>
        ''' Generates a Random String
        ''' </summary>
        ''' <param name="n">number of characters the method should generate</param>
        ''' <param name="UseSpecial">should the method include special characters? IE: #, $, !, etc.</param>
        ''' <param name="SpecialOnly">should the method include only the special characters and exclude alphanumerics</param>
        ''' <returns>a random string n characters long</returns>
        Public Function GenerateRandom(ByVal n As Integer, Optional ByVal UseSpecial As Boolean = True, Optional ByVal SpecialOnly As Boolean = False) As String
            Dim chars As String()       ' a character array to use when generating a random string
            Dim ichars As Integer = 74  ' number of characters to use out of the chars string
            Dim schars As Integer = 0   ' number of characters to skip out of the characters string
            chars = { _
                "A", "B", "C", "D", "E", "F", _
                "G", "H", "I", "J", "K", "L", _
                "M", "N", "O", "P", "Q", "R", _
                "S", "T", "U", "V", "W", "X", _
                "Y", "Z", "0", "1", "2", "3", _
                "4", "5", "6", "7", "8", "9", _
                "a", "b", "c", "d", "e", "f", _
                "g", "h", "i", "j", "k", "l", _
                "m", "n", "o", "p", "q", "r", _
                "s", "t", "u", "v", "w", "x", _
                "y", "z", "!", "@", "#", "$", _
                "%", "^", "&", "*", "(", ")", _
                "-", "+"}
            If Not UseSpecial Then ichars = 62 ' only use the alphanumeric characters out of "chars"
            If SpecialOnly Then schars = 62 : ichars = 74 ' skip the alphanumeric characters out of "chars"
            Dim rnd As New Random()
            Dim random As String = String.Empty
            Dim i As Integer = 0
            While i < n
                random += chars(rnd.[Next](schars, ichars))
                System.Math.Max(System.Threading.Interlocked.Increment(i), i - 1)
            End While
            rnd = Nothing
            Return random
        End Function

    But if I call something like this:

        Dim str1 As String = GenerateRandom(5)
        Dim str2 As String = GenerateRandom(5)

    the response will be something like

        g*3Jq
        g*3Jq

    and the second time I call it, it will be

        3QM0$
        3QM0$

    What am I missing? I'd like every random string generated to be unique.

    Read the article

  • How to insert an Array/Object into SQL (best practice)

    - by Jason
    I need to store three items as an array in a single column and be able to quickly/easily modify that data in later functions.

    [---YOU CAN SKIP THIS PART IF YOU TRUST ME---]

    To be clear, I love and use x_ref tables all the time, but an x_ref doesn't work here because this is not a one-to-many relationship. I am making a project management tool that, among other things, assigns a user to a project and assigns hours to that project on a weekly basis, per user, sometimes for many weeks into the future. Of course there are many projects, a project can have many team members, and a team member can be involved with many projects at one time, BUT it's not one-to-many, because a team member can work many weeks on the same project with different hours for different weeks. In other words, each object really is unique. Also/finally, this data can be changed at any time by any team member -- hence it needs to be easy to manipulate.

    [--END SKIP, START READING HERE :) --]

    So assuming that the application's general schema and relation tables aren't total crap and that we are in fact up against a wall in this one case to use an array/object as a value for this column, is there a best practice for that? Like a particular SQL data type? A particular object/array format? CSV? JSON? XML? Most of the app is in C#, but (for very odd reasons that I won't explain) we could really use any environment if there is a particular one that handles this well. For the moment, I am thinking either (web service + JS/JSON) or PHP unserialize/serialize (but I am a bit sketched out by the PHP solution because it seems a bit cumbersome when using Ajax?) Thoughts anyone?
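
    For comparing the candidate formats, JSON has the advantage that essentially every environment mentioned (C#, JS, PHP) can produce and parse it. A minimal Python sketch of the round trip for the three-value record follows; the field names are invented for illustration and are not taken from the question.

        import json

        # Hypothetical record: one team member's hours on one project for one week.
        assignment = {"member_id": 42, "week": "2010-05-17", "hours": 12.5}

        # Serialize for storage in a single text column...
        column_value = json.dumps([assignment,
                                   {"member_id": 7, "week": "2010-05-17", "hours": 4.0}])

        # ...and read it back later for easy modification.
        team = json.loads(column_value)
        team[0]["hours"] = 16.0
        column_value = json.dumps(team)
        print(column_value)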

    Read the article

  • How to route tree-structured URLs with ASP.NET Routing?

    - by Venemo
    Hello Everyone, I would like to achieve something very similar to this question, with some enhancements. There is an ASP.NET MVC web application. I have a tree of entities. For example, a Page class which has a property called Children, which is of type IList<Page>. (An instance of the Page class corresponds to a row in a database.) I would like to assign a unique URL to every Page in the database. I handle Page objects with a Controller called PageController. Example URLs: http://mysite.com/Page1/ http://mysite.com/Page1/SubPage/ http://mysite.com/Page/ChildPage/GrandChildPage/ You get the picture. So, I'd like every single Page object to have its own URL that is equal to its parent's URL plus its own name. In addition to that, I also would like the ability to map a single Page to the / (root) URL. I would like to apply these rules: If a URL can be handled with any other route, or a file exists in the filesystem in the specified URL, let the default URL mapping happen If a URL can be handled by the virtual path provider, let that handle it If there is no other, map the other URLs to the PageController class I also found this question, and also this one and this one, but they weren't of much help, since they don't provide an explanation about my first two points. I see the following possible soutions: Map a route for each page invidually. This requires me to go over the entire tree when the application starts, and adding an exact match route to the end of the route table. I could add a route with {*path} and write a custom IRouteHandler that handles it, but I can't see how could I deal with the first two rules then, since this handler would get to handle everything. So far, the first solution seems to be the right one, because it is also the simplest. I would really appreciate your thoughts on this. Thank you in advance!

    Read the article

  • NHibernate error recovery

    - by Berryl
    I downloaded Rhino Security today and started going through some of the tests. Several that run perfectly in isolation start getting errors after one that purposely raises an exception runs. Here is that test:

        [Test]
        public void EntitesGroup_CanCreate()
        {
            var group = _authorizationRepository.CreateEntitiesGroup("Accounts");
            _session.Flush();
            _session.Evict(group);
            var fromDb = _session.Get<EntitiesGroup>(group.Id);
            Assert.NotNull(fromDb);
            Assert.That(fromDb.Name, Is.EqualTo(group.Name));
        }

    And here are the tests and error messages that fail:

        [Test]
        public void User_CanSave()
        {
            var ayende = new User {Name = "ayende"};
            _session.Save(ayende);
            _session.Flush();
            _session.Evict(ayende);
            var fromDb = _session.Get<User>(ayende.Id);
            Assert.That(fromDb, Is.Not.Null);
            Assert.That(ayende.Name, Is.EqualTo(fromDb.Name));
        }
        ----> System.Data.SQLite.SQLiteException : Abort due to constraint violation column Name is not unique

        [Test]
        public void UsersGroup_CanCreate()
        {
            var group = _authorizationRepository.CreateUsersGroup("Admininstrators");
            _session.Flush();
            _session.Evict(group);
            var fromDb = _session.Get<UsersGroup>(group.Id);
            Assert.NotNull(fromDb);
            Assert.That(fromDb.Name, Is.EqualTo(group.Name));
        }
        failed: NHibernate.AssertionFailure : null id in Rhino.Security.Tests.User entry (don't flush the Session after an exception occurs)

    Does anyone see how I can reset the state of the in-memory SQLite db after the first test? I changed the code to use NUnit instead of xUnit, so maybe that is part of the problem here as well.

    Cheers, Berryl

    Read the article
