Search Results

Search found 5527 results on 222 pages for 'unique constraint'.

Page 133/222 | < Previous Page | 129 130 131 132 133 134 135 136 137 138 139 140  | Next Page >

  • data structure problems

    - by Ashish
    Hey guys, please help me find solutions to some of these Amazon questions: (1) Given a file containing approximately 10 million words, design a data structure for finding the anagrams. (2) Write a program to display the ten most frequent words in a file, such that the program is efficient in all complexity measures. (3) You have a file with millions of lines of data. Only two lines are identical; the rest are all unique. Each line is so long that it may not even fit in memory. What is the most efficient solution for finding the identical lines?
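
    For the anagram question, one standard approach is to key a hash table on each word's sorted letters, so that anagrams collide on the same key. A minimal Python sketch (not from the original post):

        from collections import defaultdict

        def build_anagram_index(words):
            # Anagrams share the same multiset of letters, so sorting the
            # letters yields a canonical key that groups them together.
            index = defaultdict(list)
            for word in words:
                index["".join(sorted(word))].append(word)
            return index

        index = build_anagram_index(["listen", "silent", "enlist", "google"])
        print(index["eilnst"])  # ['listen', 'silent', 'enlist']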

    Read the article

  • detecting circular imports

    - by wallacoloo
    I'm working with a project that contains about 30 unique modules. It wasn't designed too well, so it's common that I create circular imports when adding new functionality to the project. Of course, when I add the circular import, I'm unaware of it. Sometimes it's pretty obvious I've made a circular import, when I get an error like AttributeError: 'module' object has no attribute 'attribute' where I clearly defined 'attribute'. But other times the code doesn't throw exceptions, because of the way it's used. So, to my question: is it possible to programmatically detect when and where a circular import is occurring?
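
    One way to detect this programmatically (a rough sketch in Python 3, not an established library API) is to wrap the built-in __import__ and watch for re-entry into a module that is still mid-import:

        import builtins

        _importing = []                      # modules currently being imported
        _orig_import = builtins.__import__

        def _tracking_import(name, *args, **kwargs):
            # Re-entering a module that is still being imported means a cycle.
            if name in _importing:
                cycle = _importing[_importing.index(name):] + [name]
                print("circular import detected:", " -> ".join(cycle))
            _importing.append(name)
            try:
                return _orig_import(name, *args, **kwargs)
            finally:
                _importing.pop()

        builtins.__import__ = _tracking_import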

    Read the article

  • Curve fitting: Find the smoothest function that satisfies a list of constraints.

    - by dreeves
    Consider the set of non-decreasing surjective (onto) functions from (-inf,inf) to [0,1]. (Typical CDFs satisfy this property.) In other words, for any real number x, 0 <= f(x) <= 1. The logistic function is perhaps the best-known example. We are now given some constraints in the form of a list of x-values and, for each x-value, a pair of y-values that the function must lie between. We can represent that as a list of {x,ymin,ymax} triples such as constraints = {{0, 0, 0}, {1, 0.00311936, 0.00416369}, {2, 0.0847077, 0.109064}, {3, 0.272142, 0.354692}, {4, 0.53198, 0.646113}, {5, 0.623413, 0.743102}, {6, 0.744714, 0.905966}} (the accompanying plot is omitted in this excerpt). We now seek a curve that respects those constraints. For example, let's first try a simple interpolation through the midpoints of the constraints: mids = ({#1, Mean[{#2,#3}]}&) @@@ constraints f = Interpolation[mids, InterpolationOrder->0] Plotted (image omitted), f is not surjective; we'd also like it to be smoother. We can increase the interpolation order, but then it violates the constraint that its range is [0,1]. The goal, then, is to find the smoothest function that satisfies the constraints: (1) non-decreasing; (2) tends to 0 as x approaches negative infinity and tends to 1 as x approaches infinity; (3) passes through a given list of y-error-bars. The first example I plotted seems to be a good candidate, but I found it with Mathematica's FindFit function assuming a lognormal CDF. That works well in this specific example, but in general there need not be a lognormal CDF that satisfies the constraints.
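
    Outside Mathematica, the FindFit-style approach mentioned above can be sketched in Python with SciPy: fit a parametric CDF (a logistic here, standing in for the lognormal) to the midpoints of the error bars. This illustrates the fitting step only, not a solution to the general smoothest-curve problem:

        import numpy as np
        from scipy.optimize import curve_fit

        constraints = np.array([
            [0, 0.0, 0.0], [1, 0.00311936, 0.00416369], [2, 0.0847077, 0.109064],
            [3, 0.272142, 0.354692], [4, 0.53198, 0.646113],
            [5, 0.623413, 0.743102], [6, 0.744714, 0.905966]])

        x = constraints[:, 0]
        y_mid = constraints[:, 1:].mean(axis=1)      # midpoint of each error bar

        def logistic_cdf(x, mu, s):
            # Non-decreasing, tends to 0 at -inf and 1 at +inf by construction.
            return 1.0 / (1.0 + np.exp(-(x - mu) / s))

        params, _ = curve_fit(logistic_cdf, x, y_mid, p0=(3.0, 1.0))
        print("fitted mu, s:", params)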

    Read the article

  • Suggest Cassandra data model for an existing schema

    - by Andriy Bohdan
    Hello guys! I hope there's someone who can suggest a suitable data model to be implemented using the NoSQL database Apache Cassandra. More than that, I need it to work under high load and large amounts of data. Simplified, I have 3 types of objects: Product, Tag, and ProductTag. Product: key - string key; name - string; ... - some other fields. Tag: key - string key; name - unique tag words. ProductTag: product_key - foreign key referring to product; tag_key - foreign key referring to tag; rating - the rating of this tag for this product. Each product may have 0 or many tags. A tag may be assigned to 1 or many products, so the relation between products and tags is many-to-many in terms of relational databases. The value of "rating" is updated "very" often. I need to run the following queries: select objects by keys; select tags for a product ordered by rating; select products by tag ordered by rating; update rating by product_key and tag_key. The most important thing is to make these queries really fast on large amounts of data, considering that the rating is constantly updated.
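
    One possible direction, shown as a CQL sketch with assumed types (the same idea would be expressed as column families in older Thrift-era Cassandra): denormalize into one table per query and cluster on rating, so the ordering is free at read time. Because rating is part of the clustering key, "updating" a rating means deleting the old row and inserting a new one.

        -- tags for a product, ordered by rating
        CREATE TABLE tags_by_product (
            product_key text,
            rating      double,
            tag_key     text,
            PRIMARY KEY (product_key, rating, tag_key)
        ) WITH CLUSTERING ORDER BY (rating DESC);

        -- products for a tag, ordered by rating
        CREATE TABLE products_by_tag (
            tag_key     text,
            rating      double,
            product_key text,
            PRIMARY KEY (tag_key, rating, product_key)
        ) WITH CLUSTERING ORDER BY (rating DESC);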

    Read the article

  • Instantiate type variable in Haskell

    - by danportin
    EDIT: Solved. I was unaware that enabling a language extension in the source file does not enable the language extension in GHCi. The solution was to :set -XFlexibleContexts in GHCi. I recently discovered that type declarations in classes and instances in Haskell are Horn clauses. So I encoded the arithmetic operations from The Art of Prolog, Chapter 3, into Haskell. For instance: fac(0,s(0)). fac(s(N),F) :- fac(N,X), mult(s(N),X,F). class Fac x y | x -> y instance Fac Z (S Z) instance (Fac n x, Mult (S n) x f) => Fac (S n) f pow(s(X),0,0) :- nat(X). pow(0,s(X),s(0)) :- nat(X). pow(s(N),X,Y) :- pow(N,X,Z), mult(Z,X,Y). class Pow x y z | x y -> z instance (N n) => Pow (S n) Z Z instance (N n) => Pow Z (S n) (S Z) instance (Pow n x z, Mult z x y) => Pow (S n) x y In Prolog, values are instantiated for (logic) variables during a proof. However, I don't understand how to instantiate type variables in Haskell. That is, I don't understand what the Haskell equivalent of a Prolog query ?- f(X1,X2,...,Xn) is. I assumed that :t undefined :: (f x1 x2 ... xn) => xi would cause Haskell to instantiate xi, but this gives a Non type-variable argument in the constraint error, even with FlexibleContexts enabled in the source file.
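
    For the value-level analogue of a Prolog query, a common trick (a self-contained sketch using a simpler Add relation; the names are mine, not from the original) is to declare a function whose type carries the constraint and let GHCi's type inference play the role of the resolution engine:

        {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
                     FlexibleInstances, UndecidableInstances #-}
        data Z
        data S n

        class Add x y z | x y -> z
        instance Add Z y y
        instance Add x y z => Add (S x) y (S z)

        add :: Add x y z => x -> y -> z
        add = undefined

        -- The analogue of the Prolog query ?- add(s(s(0)), s(0), X) is:
        --   ghci> :t add (undefined :: S (S Z)) (undefined :: S Z)
        -- to which GHCi answers with the improved type S (S (S Z)).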

    Read the article

  • Log4net Logging Problem: Very simple file appender logging not working

    - by contactmatt
    Here's my web.config information <configSections> <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net"/> </configSections> <log4net> <root> <level value="ALL" /> </root> <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender"> <file value="c:\temp\log-file.txt" /> <appendToFile value="true" /> <rollingStyle value="Size" /> <maxSizeRollBackups value="10" /> <maximumFileSize value="1MB" /> <staticLogFileName value="true" /> <layout type="log4net.Layout.SimpleLayout" /> </appender> </log4net> ... Here's the code that initializes the logger: protected void SendMessage() { log4net.Config.XmlConfigurator.Configure(); ILog log = LogManager.GetLogger(typeof(Contact)); ... log.Info("here we go!"); log.Debug("debug afasf"); ... } It doesn't work, no matter what I seem to do. I am referencing log4net.dll correctly, and by debugging the application I can see that the log object is initialized properly. This is an ASP.NET 3.5 framework web project. Any ideas/suggestions? I originally thought this error might be due to a file-write permission constraint, but that doesn't seem to be the case (or so I think).
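
    One thing worth checking (an educated guess, not stated in the original post): the <root> element above sets a level but never references the appender, and log4net only writes to appenders that are attached to a logger. Attaching it would look like:

        <root>
          <level value="ALL" />
          <appender-ref ref="RollingFileAppender" />
        </root>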

    Read the article

  • Using boost unordered map

    - by Amrish
    Guys, I am using a dynamic programming approach to solve a problem. Here is a brief overview of the approach: each value generated is identified using 25 unique keys. I use boost::hash_combine to generate the seed for the hash table from these 25 keys. I store the values in a hash table declared as boost::unordered_map<Key_Object, Data_Object, HashFunction> hashState; I did time profiling on my algorithm and found that nearly 95% of the run time is spent retrieving/inserting data into the hash table. These were the details of my hash table: hashState.size() 1880; hashState.load_factor() 0.610588; hashState.bucket_count() 3079; hashState.max_size() 805306456; hashState.max_load_factor() 1; hashState.max_bucket_count() 805306457. I have the following two questions: (1) Is there anything I can do to improve the performance of the hash table's insert/retrieve operations? (2) C++ STL has hash_multimap, which would also suit my requirement. How does Boost's unordered_map compare with hash_multimap in terms of insert/retrieve performance?
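
    If much of the cost is rehashing as the table grows, pre-sizing it and tuning the load factor can help. A minimal self-contained sketch (with a simplified key type, since Key_Object and HashFunction are not shown in the question):

        #include <boost/unordered_map.hpp>
        #include <string>

        int main() {
            boost::unordered_map<std::string, int> table;
            table.max_load_factor(0.7f);   // rehash earlier, keeping bucket chains short
            table.rehash(1 << 15);         // reserve ~32k buckets before bulk insertion
            table["example"] = 42;         // inserts no longer trigger repeated rehashes
            return 0;
        }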

    Read the article

  • How to create multiple tables with the same schema using SQLite jdbc

    - by Space_C0wb0y
    I want to split a large table horizontally, and I would like to make sure that all three resulting tables have the same schema. Currently I am using this piece of code to create the tables: statement .executeUpdate("CREATE TABLE AnnotationsMolecularFunction (Id INTEGER PRIMARY KEY ASC AUTOINCREMENT, " + "ProteinId NOT NULL, " + "GOId NOT NULL, " + "UNIQUE (ProteinId, GOId), " + "FOREIGN KEY(ProteinId) REFERENCES Protein(Id))"); There is one such statement for each table. This is bad, because if I decide to change the schema later (which will most certainly happen), I will have to change it three times, which invites errors, so I would like a way to make sure that the other tables have the same schema without explicitly writing it again. I can use: statement .executeUpdate("CREATE TABLE AnnotationsBiologicalProcess AS SELECT * FROM AnnotationsMolecularFunction"); to create the other tables with the same columns, but the constraints are not applied. I could of course just generate the same query string three times with different table names in Java, but I would like to know if there is an SQL way of achieving this.
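
    One SQLite-specific possibility (a sketch; statement is the same java.sql.Statement used above): read the original table's CREATE statement back from the sqlite_master catalog and re-issue it with the table name swapped, so all constraints come along.

        ResultSet rs = statement.executeQuery(
            "SELECT sql FROM sqlite_master WHERE type = 'table' "
                + "AND name = 'AnnotationsMolecularFunction'");
        String ddl = rs.next() ? rs.getString("sql") : null;
        rs.close();
        if (ddl != null) {
            // sqlite_master stores the verbatim DDL, constraints included.
            statement.executeUpdate(ddl.replace(
                "AnnotationsMolecularFunction", "AnnotationsBiologicalProcess"));
        }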

    Read the article

  • Machine Learning Algorithm for Peer-to-Peer Nodes

    - by FreshCode
    I want to apply machine learning to a classification problem in a parallel environment. Several independent nodes, each with multiple on/off sensors, can communicate their sensor data with the goal of classifying an event as defined by a heuristic, training data or both. Each peer will be measuring the same data from their unique perspective and will attempt to classify the result while taking into account that any neighbouring node (or its sensors or just the connection to the node) could be faulty. Nodes should function as equal peers and determine the most likely classification by communicating their results. Ultimately each node should make a decision based on their own sensor data and their peers' data. If it matters, false positives are OK for certain classifications (albeit undesirable) but false negatives would be totally unacceptable. Given that each final classification will receive good or bad feedback, what would be an appropriate machine learning algorithm to approach this problem with if the nodes could communicate with each other to determine the most likely classification?

    Read the article

  • "Ambigous type variable" error when defining custom "read" function

    - by Tener
    While trying to compile the following code, which is an enhanced version of read built on readMay from the Safe package: readI :: (Typeable a, Read a) => String -> a readI str = case readMay str of Just x -> x Nothing -> error ("Prelude.read failed, expected type: " ++ (show (typeOf (undefined :: a))) ++ " String was: " ++ str) I get an error from GHC: WavefrontSimple.hs:54:81: Ambiguous type variable `a' in the constraint: `Typeable a' arising from a use of `typeOf' at src/WavefrontSimple.hs:54:81-103 Probable fix: add a type signature that fixes these type variable(s). I don't understand why. What should be fixed to get what I meant? EDIT: Ok, so the solution of using ScopedTypeVariables and forall a in the type signature works. But why does the following produce a very similar error to the one above? The compiler should infer the right type, since asTypeOf :: a -> a -> a is used. readI :: (Typeable a, Read a) => String -> a readI str = let xx = undefined in case readMay str of Just x -> x `asTypeOf` xx Nothing -> error ("Prelude.read failed, expected type: " ++ (show (typeOf xx)) ++ " String was: " ++ str)
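
    For reference, the ScopedTypeVariables fix mentioned in the edit looks like this (a sketch): the explicit forall brings a into scope, so the annotation inside the body refers to the same type variable as the signature. The asTypeOf variant fails for a related reason: a let-bound xx = undefined is generalized to forall b. b, so each use of xx is instantiated at a fresh type and the two occurrences are never linked.

        {-# LANGUAGE ScopedTypeVariables #-}
        import Data.Typeable (Typeable, typeOf)
        import Safe (readMay)

        readI :: forall a. (Typeable a, Read a) => String -> a
        readI str = case readMay str of
          Just x  -> x
          Nothing -> error ("Prelude.read failed, expected type: "
                            ++ show (typeOf (undefined :: a))
                            ++ " String was: " ++ str)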

    Read the article

  • Impact of ordering of correlated subqueries within a projection

    - by Michael Petito
    I'm noticing something a bit unexpected with how SQL Server (SQL Server 2008 in this case) treats correlated subqueries within a select statement. My assumption was that a query plan should not be affected by the mere order in which subqueries (or columns, for that matter) are written within the projection clause of the select statement. However, this does not appear to be the case. Consider the following two queries, which are identical except for the ordering of the subqueries within the CTE: --query 1: subquery for Color is second WITH vw AS ( SELECT p.[ID], (SELECT TOP(1) [FirstName] FROM [Preference] WHERE p.ID = ID AND [FirstName] IS NOT NULL ORDER BY [LastModified] DESC) [FirstName], (SELECT TOP(1) [Color] FROM [Preference] WHERE p.ID = ID AND [Color] IS NOT NULL ORDER BY [LastModified] DESC) [Color] FROM Person p ) SELECT ID, Color, FirstName FROM vw WHERE Color = 'Gray'; --query 2: subquery for Color is first WITH vw AS ( SELECT p.[ID], (SELECT TOP(1) [Color] FROM [Preference] WHERE p.ID = ID AND [Color] IS NOT NULL ORDER BY [LastModified] DESC) [Color], (SELECT TOP(1) [FirstName] FROM [Preference] WHERE p.ID = ID AND [FirstName] IS NOT NULL ORDER BY [LastModified] DESC) [FirstName] FROM Person p ) SELECT ID, Color, FirstName FROM vw WHERE Color = 'Gray'; If you look at the two query plans, you'll see that an outer join is used for each subquery and that the order of the joins is the same as the order the subqueries are written. There is a filter applied to the result of the outer join for color, to filter out rows where the color is not 'Gray'. (It's odd to me that SQL would use an outer join for the color subquery since I have a non-null constraint on the result of the color subquery, but OK.) Most of the rows are removed by the color filter. The result is that query 2 is significantly cheaper than query 1 because fewer rows are involved with the second join. All reasons for constructing such a statement aside, is this an expected behavior? Shouldn't SQL server opt to move the filter as early as possible in the query plan, regardless of the order the subqueries are written?

    Read the article

  • Data validation tools (ETL tools) for SQL Server

    - by Stan
    I have some data in Excel and need to import it into a database. Is there any tool that can validate and maybe clean the data? Does Red Gate have such a tool? The input will be Excel, given table constraints, e.g. CHECK, UNIQUE KEY, datetime format, NOT NULL. The desired output should at least show which lines have problems, and then fix some trivial errors automatically, like filling in a default value for NULL columns or correcting the datetime format. I know a script like this could be built in Python, but I just wonder what the popular way to do this is. Thanks.
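
    For the scripting route, a pandas sketch of flagging problem rows and applying the trivial fixes (the file and column names here are made up for illustration):

        import pandas as pd

        df = pd.read_excel("input.xlsx")                 # hypothetical input file

        bad = pd.DataFrame(index=df.index)
        bad["null_name"] = df["Name"].isna()                         # NOT NULL check
        bad["dup_key"] = df.duplicated(subset=["Key"], keep=False)   # UNIQUE KEY check
        bad["bad_date"] = pd.to_datetime(df["Created"], errors="coerce").isna()

        print(df[bad.any(axis=1)])                       # lines that have problems

        df["Name"] = df["Name"].fillna("unknown")        # default for NULL columns
        df["Created"] = pd.to_datetime(df["Created"], errors="coerce")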

    Read the article

  • Create Generic method constraining T to an Enum

    - by johnc
    I'm building a function to extend the Enum.Parse concept that (1) allows a default value to be parsed in case an Enum value is not found, and (2) is case insensitive. So I wrote the following: public static T GetEnumFromString<T>(string value, T defaultValue) where T : Enum { if (string.IsNullOrEmpty(value)) return defaultValue; foreach (T item in Enum.GetValues(typeof(T))) { if (item.ToString().ToLower().Equals(value.Trim().ToLower())) return item; } return defaultValue; } I am getting the error: Constraint cannot be special class 'System.Enum'. Fair enough, but is there a workaround to allow a generic Enum, or am I going to have to mimic the Parse function and pass a type as an attribute, which forces the ugly boxing requirement into your code? EDIT: All suggestions below have been greatly appreciated, thanks. I have settled on the following (I've left the loop to maintain case insensitivity - I am using this when parsing XML): public static class EnumUtils { public static T ParseEnum<T>(string value, T defaultValue) where T : struct, IConvertible { if (!typeof(T).IsEnum) throw new ArgumentException("T must be an enumerated type"); if (string.IsNullOrEmpty(value)) return defaultValue; foreach (T item in Enum.GetValues(typeof(T))) { if (item.ToString().ToLower().Equals(value.Trim().ToLower())) return item; } return defaultValue; } }

    Read the article

  • Tuples of unknown size/parameter types

    - by myahya
    I need to create a map from integers to sets of tuples; the tuples in a single set have the same size. The problem is that the size of a tuple and its parameter types can be determined only at runtime, not at compile time. I am imagining something like: std::map<int, std::set<boost::tuple> > but I am not exactly sure how to do this, possibly using pointers. The purpose of this is to create temporary relations (tables), each with a unique identifier (key) - but maybe you have another approach.
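
    Since boost::tuple is sized at compile time, one runtime-sized alternative (a sketch; the supported field types are an assumption) is a vector of variants, which std::set can order because both std::vector and boost::variant define operator<:

        #include <map>
        #include <set>
        #include <string>
        #include <vector>
        #include <boost/variant.hpp>

        typedef boost::variant<int, double, std::string> Field;  // one alternative per column type
        typedef std::vector<Field> Row;                          // a "tuple" sized at runtime

        int main() {
            std::map<int, std::set<Row> > relations;             // key -> temporary relation

            Row r1, r2;
            r1.push_back(42);  r1.push_back(std::string("answer"));
            r2.push_back(7);   r2.push_back(std::string("seven"));

            relations[1].insert(r1);
            relations[1].insert(r2);
            return 0;
        }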

    Read the article

  • Database Design: Primary Key, ID vs String

    - by LnDCobra
    Hi, I am currently planning to develop a music streaming application, and I am wondering what would be better as a primary key in my tables on the server: an ID int or a unique string. Method 1: Songs Table: SongID(int), Title(string), Artist*(string), Length(int), Album*(string) Genre Table: Genre(string), Name(string) SongGenre: SongID*(int), Genre*(string) Method 2: Songs Table: SongID(int), Title(string), ArtistID*(int), Length(int), AlbumID*(int) Genre Table: GenreID(int), Name(string) SongGenre: SongID*(int), GenreID*(int) Key: Bold = Primary Key, Field* = Foreign Key. I'm currently designing using method 2, as I believe it will speed up lookups and use less space, since an int takes a lot less space than a string. Is there any reason this isn't a good idea? Is there anything I should be aware of?

    Read the article

  • See return value in C#

    - by Snake
    Hi, consider the following piece of code (screenshot omitted in this excerpt): as you can see, we are stopped on line 28. Is there any way to see the return value of the function at this point, without letting the code return to the caller function? Foo.Bar() is a function call which generates a unique path (for example), so it's NOT constant. In VB.NET it's possible by entering the function's name in the Watch window, which will then treat it as a variable. But in C# this is not possible; any other tips? PS: rewriting is not an option.

    Read the article

  • SQL model optimization question

    - by supermogx
    I need to keep track of the number of "hits" on a particular item in a DB. The thing is that the "hits" should stay unique per user ID, so if a user hits the item 3 times, it should still count as 1 hit. Also, I need to display the total number of hits for a particular item. Is there a better way than storing each hit for each item by each user in a separate table? Would keeping the user IDs in a comma-separated string be a better and more efficient way?
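
    The separate table is the usual answer; with a composite primary key it stays compact and repeat hits are naturally deduplicated, whereas a comma-separated string cannot be indexed or constrained by the database. A sketch with assumed names:

        CREATE TABLE ItemHit (
            ItemId INT NOT NULL,
            UserId INT NOT NULL,
            PRIMARY KEY (ItemId, UserId)  -- one row per user per item
        );

        -- total unique hits for one item
        SELECT COUNT(*) FROM ItemHit WHERE ItemId = 42;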

    Read the article

  • Django models: Use multiple values as a key?

    - by Rosarch
    Here is a simple model: class TakingCourse(models.Model): course = models.ForeignKey(Course) term = models.ForeignKey(Term) Instead of Django creating a default primary key, I would like to use both course and term as the primary key - taken together, they uniquely identify a tuple. Is this allowed by Django? On a related note: I am trying to represent users taking courses in certain terms. Is there a better way to do this? class Course(models.Model): name = models.CharField(max_length=200) requiredFor = models.ManyToManyField(RequirementSubSet, blank=True) offeringSchool = models.ForeignKey(School) def __unicode__(self): return "%s at %s" % (self.name, self.offeringSchool) class MyUser(models.Model): user = models.ForeignKey(User, unique=True) takingReqSets = models.ManyToManyField(RequirementSet, blank=True) takingTerms = models.ManyToManyField(Term, blank=True) takingCourses = models.ManyToManyField(TakingCourse, blank=True) school = models.ForeignKey(School) class TakingCourse(models.Model): course = models.ForeignKey(Course) term = models.ForeignKey(Term) class Term(models.Model): school = models.ForeignKey(School) isPrimaryTerm = models.BooleanField()
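
    Django models do not support composite primary keys; the usual workaround (a sketch, reusing the question's own model) is to keep the surrogate key and add a uniqueness constraint over the pair:

        class TakingCourse(models.Model):
            course = models.ForeignKey(Course)
            term = models.ForeignKey(Term)

            class Meta:
                unique_together = (('course', 'term'),)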

    Read the article

  • OpenID like Stack Overflow

    - by eWolf
    I want to create an OpenID login with PHP just like the one found on Stack Overflow. I know there are many questions about this, but mine is different. If I understood correctly, every OpenID is identified by a unique URL. But: if I hit the Google button on the Stack Overflow login page, one generic URL is inserted in the text field. Is this the direct URL to the OpenID server? And if it is, how do I pass the URL to this class?

    Read the article

  • How to fetch message body and attachments in XML format from Lotus Domino server from linux using php

    - by too
    Does anybody have some information about accessing a Lotus Domino server to fetch entire mail contents via http(s) requests from a PHP/Linux server? The article by Andrei Kouvchinnikov describes well how to fetch the message list in Notes mail folders; after obtaining a session id during login one can, for example, select the top 100 messages by calling: https://your.server.domain/mail_db/mailbox.nsf/($Inbox)?ReadViewEntries&Start=1&Count=100 And this works perfectly. The problem arises when I try to get message contents (0A1DA5EEB7B65277C12576F50055D811 is an example message unique id): https://your.server.domain/mail_db/mailbox.nsf/($Inbox)/0A1DA5EEB7B65277C12576F50055D811/?OpenDocument Such a request in IE shows a frameset with data that is hard to parse; in less common browsers like Opera it reports an unsupported browser. Ideally, if it is possible to fetch Notes message contents and all attachments by requesting them in the URL, does anybody know what that request would be? A link to a Lotus web-commands reference would be even more beneficial.

    Read the article

  • SQL Server Process Queue Race Condition

    - by William Edmondson
    I have an order queue that is accessed by multiple order processors through a stored procedure. Each processor passes in a unique ID which is used to lock the next 20 orders for its own use. The stored procedure then returns these records to the order processor to be acted upon. There are cases where multiple processors are able to retrieve the same OrderTable record, at which point they try to operate on it simultaneously. This ultimately results in errors being thrown later in the process. My next course of action is to allow each processor to grab all available orders and just round-robin the processors, but I was hoping to simply make this section of code thread-safe and allow the processors to grab records whenever they like. So, explicitly: any idea why I am experiencing this race condition, and how can I solve the problem? BEGIN TRAN UPDATE OrderTable WITH ( ROWLOCK ) SET ProcessorID = @PROCID WHERE OrderID IN ( SELECT TOP ( 20 ) OrderID FROM OrderTable WITH ( ROWLOCK ) WHERE ProcessorID = 0) COMMIT TRAN SELECT OrderID, ProcessorID, etc... FROM OrderTable WHERE ProcessorID = @PROCID
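
    ROWLOCK alone does not stop two sessions from reading the same candidate rows before either one updates them. A common SQL Server queue pattern (a sketch of one possible fix) is to take update locks while selecting and skip rows already locked by other processors:

        BEGIN TRAN;

        UPDATE OrderTable
        SET ProcessorID = @PROCID
        WHERE OrderID IN (
            SELECT TOP ( 20 ) OrderID
            FROM OrderTable WITH ( UPDLOCK, READPAST, ROWLOCK )
            WHERE ProcessorID = 0 );

        COMMIT TRAN;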

    Read the article

  • jQuery ajax request error callback is called instead of success even after response received from server

    - by Muhammad Tahir Butt
    I am using the jQuery ajax function to get some content from my web service. The response from the server is received, but every time the error callback is called instead of the success callback, and this is returned in xhr.error: function (){if(l){var t=l.length;(function i(t){x.each(t,function(t,n){var r=x.type(n);"function"===r?e.unique&&p.has(n)||l.push(n):n&&n.length&&"string"!==r&&i(n)})})(arguments),n?o=l.length:r&&(s=t,c(r))}return this} Here is the screenshot of the response from the server (omitted in this excerpt), and here is the code I am using to make the request: function abcdef() { $.ajax({ url: "http://192.168.61.129:8000/get-yt-access-token/", type: "GET", contentType:"application/json", error: function(xhr, textStatus, errorThrown){ alert("its error! " + xhr.error); }, success: function(data){ alert(data); } }); }
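
    One likely cause (an assumption based on the symptoms, not stated in the original): the request targets a different origin (http://192.168.61.129:8000), so even though the server responds, the browser's same-origin policy withholds the response from the script and jQuery fires the error handler. Either the server must send CORS headers (e.g. Access-Control-Allow-Origin), or, if it supports a callback parameter, JSONP can be used:

        $.ajax({
            url: "http://192.168.61.129:8000/get-yt-access-token/",
            type: "GET",
            dataType: "jsonp",  // JSONP sidesteps same-origin XHR restrictions,
                                // but only if the server supports a callback parameter
            success: function (data) { alert(data); },
            error: function (xhr, textStatus) { alert("error: " + textStatus); }
        });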

    Read the article

  • Extend LDAP Membership to append a prefix/suffix to the username

    - by Romias
    Our web applications use the LDAP Membership Provider to authenticate and register users in Active Directory. In order to allow users to provide usernames that exist in other applications, we need to add a prefix to each username, and it should be as transparent and painless as possible. What I need is a way to extend the LDAP Membership Provider to add (concatenate) a prefix to the username just before Membership authenticates or registers it. For example, if the user input is "JohnS" in application 1, I want to authenticate "App1_JohnS". How could I extend the membership provider to accomplish this? Any idea what is triggered just before authenticating and registering (creating a user)? Update: Each web app has an "OU" in AD where it creates users and authenticates from. But since there is just one Active Directory controller, the usernames must be unique. We need to solve this issue using membership providers and not by adding more ADs.
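
    One way (a sketch; the class name and prefix constant are illustrative) is to derive from ActiveDirectoryMembershipProvider and prepend the prefix before the base provider ever sees the username - ValidateUser and CreateUser are the members triggered by authentication and registration:

        using System.Web.Security;

        public class PrefixedAdMembershipProvider : ActiveDirectoryMembershipProvider
        {
            private const string Prefix = "App1_";  // per-application prefix

            public override bool ValidateUser(string username, string password)
            {
                // Authentication path: prefix before the AD lookup happens.
                return base.ValidateUser(Prefix + username, password);
            }

            public override MembershipUser CreateUser(string username, string password,
                string email, string passwordQuestion, string passwordAnswer,
                bool isApproved, object providerUserKey, out MembershipCreateStatus status)
            {
                // Registration path: the user is stored under the prefixed name.
                return base.CreateUser(Prefix + username, password, email,
                    passwordQuestion, passwordAnswer, isApproved,
                    providerUserKey, out status);
            }
        }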

    Read the article

  • Asynchronous callback - GWT

    - by sprasad12
    Hi, I am using GWT and Postgres for my project. On the front end I have a few widgets whose data I am trying to save to tables at the back end when I click on the "save project" button (this also takes the name of the created project). In the asynchronous callback part I am setting more than one table, but it is not sending the data properly. I am getting the following error: org.postgresql.util.PSQLException: ERROR: insert or update on table "entitytype" violates foreign key constraint "entitytype_pname_fkey" Detail: Key (pname)=(Project Name) is not present in table "project". But when I do a select statement on the project table I can see that the project name is present. Here is how the callback part looks: oksave.addClickHandler(new ClickHandler(){ @Override public void onClick(ClickEvent event) { if(erasync == null) erasync = GWT.create(EntityRelationService.class); AsyncCallback<Void> callback = new AsyncCallback<Void>(){ @Override public void onFailure(Throwable caught) { } @Override public void onSuccess(Void result){ } }; erasync.setProjects(projectname, callback); for(int i = 0; i < boundaryPanel.getWidgetCount(); i++){ top = new Integer(boundaryPanel.getWidget(i).getAbsoluteTop()).toString(); left = new Integer(boundaryPanel.getWidget(i).getAbsoluteLeft()).toString(); if(widgetTitle.startsWith("ATTR")){ type = "regular"; erasync.setEntityAttribute(name1, name, type, top, left, projectname, callback); } else{ erasync.setEntityType(name, top, left, projectname, callback); } } } Question: Is it wrong to set more than one table in the asynchronous callback when all the other tables are dependent on a particular table? When I say setProjects in the above code, isn't it first completed and then moved on to the next one? Any input will be greatly appreciated. Thank you.
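
    Each RPC call is asynchronous, so nothing guarantees that setProjects has finished on the server before the loop's calls arrive - a plausible cause of the foreign-key violation. A sketch of chaining the dependent calls inside the first onSuccess (reusing the question's names):

        erasync.setProjects(projectname, new AsyncCallback<Void>() {
            @Override
            public void onFailure(Throwable caught) {
                // report the error
            }

            @Override
            public void onSuccess(Void result) {
                // The project row now exists, so rows referencing it can be inserted.
                for (int i = 0; i < boundaryPanel.getWidgetCount(); i++) {
                    // ... setEntityAttribute / setEntityType calls as in the original ...
                }
            }
        });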

    Read the article

  • Django model manager doesn't work with related objects when I do an aggregated query

    - by Satoru.Logic
    Hi, all. I'm having trouble doing an aggregation query on a many-to-many related field. Let's begin with my models: class SortedTagManager(models.Manager): use_for_related_fields = True def get_query_set(self): orig_query_set = super(SortedTagManager, self).get_query_set() # FIXME `used` is wrongly counted return orig_query_set.distinct().annotate( used=models.Count('users')).order_by('-used') class Tag(models.Model): content = models.CharField(max_length=32, unique=True) creator = models.ForeignKey(User, related_name='tags_i_created') users = models.ManyToManyField(User, through='TaggedNote', related_name='tags_i_used') objects_sorted_by_used = SortedTagManager() class TaggedNote(models.Model): """Association table of both (Tag , Note) and (Tag, User)""" note = models.ForeignKey(Note) # Note is what's tagged in my app tag = models.ForeignKey(Tag) tagged_by = models.ForeignKey(User) class Meta: unique_together = (('note', 'tag'),) However, the value of the aggregated field used is only correct when the model is queried directly: for t in Tag.objects.all(): print t.used # this works correctly for t in user.tags_i_used.all(): print t.used #prints n^2 when it should give n Would you please tell me what's wrong with it? Thanks in advance.
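
    A likely fix (an assumption worth testing): when the related-manager query joins through the TaggedNote table, each tag row is duplicated once per matching join row, inflating the count; counting distinct users compensates.

        def get_query_set(self):
            orig_query_set = super(SortedTagManager, self).get_query_set()
            return orig_query_set.annotate(
                used=models.Count('users', distinct=True)).order_by('-used')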

    Read the article
