Search Results

Search found 24117 results on 965 pages for 'write'.


  • Architecture for data layer that uses both localStorage and a REST remote server

    - by Zack
    Does anybody have ideas or references on how to implement a data persistence layer that uses both localStorage and a REST remote storage? The data of a certain client is stored with localStorage (using an ember-data indexedDB adapter). The locally stored data is synced with the remote server (using the ember-data RESTAdapter). The server gathers all data from clients. Using mathematical set notation: Server = Client1 ∪ Client2 ∪ ... ∪ ClientN, where, in general, a record may not be unique to a certain client. Here are some scenarios: A client creates a record. The id of the record cannot be set on the client, since it may conflict with a record stored on the server. Therefore a newly created record needs to be committed to the server, receive its id, and then be created in localStorage. A record is updated on the server, and as a consequence the data in localStorage and on the server go out of sync. Only the server knows that, so the architecture needs to implement a push mechanism (?). Would you use 2 stores (one for localStorage, one for REST) and sync between them, or use a hybrid indexedDB/REST adapter and write the sync code within the adapter? Can you see any way to avoid implementing push (Web Sockets, ...)?


  • python-iptables: Cryptic error when allowing incoming TCP traffic on port 1234

    - by Lucas Kauffman
    I wanted to write an iptables script in Python. Rather than calling iptables itself I wanted to use the python-iptables package. However I'm having a hard time getting some basic rules setup. I wanted to use the filter chain to accept incoming TCP traffic on port 1234. So I wrote this: import iptc chain = iptc.Chain(iptc.TABLE_FILTER,"INPUT") rule = iptc.Rule() target = iptc.Target(rule,"ACCEPT") match = iptc.Match(rule,'tcp') match.dport='1234' rule.add_match(match) rule.target = target chain.insert_rule(rule) However when I run this I get this thrown back at me: Traceback (most recent call last): File "testing.py", line 9, in <module> chain.insert_rule(rule) File "/usr/local/lib/python2.6/dist-packages/iptc/__init__.py", line 1133, in insert_rule self.table.insert_entry(self.name, rbuf, position) File "/usr/local/lib/python2.6/dist-packages/iptc/__init__.py", line 1166, in new obj.refresh() File "/usr/local/lib/python2.6/dist-packages/iptc/__init__.py", line 1230, in refresh self._free() File "/usr/local/lib/python2.6/dist-packages/iptc/__init__.py", line 1224, in _free self.commit() File "/usr/local/lib/python2.6/dist-packages/iptc/__init__.py", line 1219, in commit raise IPTCError("can't commit: %s" % (self.strerror())) iptc.IPTCError: can't commit: Invalid argument Exception AttributeError: "'NoneType' object has no attribute 'get_errno'" in <bound method Table.__del__ of <iptc.Table object at 0x7fcad56cc550>> ignored Does anyone have experience with python-iptables that could enlighten on what I did wrong?
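
    One frequent cause of this particular "can't commit: Invalid argument" is adding a tcp match to a rule whose protocol field was never set. As a hedged sketch (the same python-iptables calls as above, with only that one assignment added - not verified against every version of the library):

        import iptc

        chain = iptc.Chain(iptc.TABLE_FILTER, "INPUT")

        rule = iptc.Rule()
        rule.protocol = "tcp"            # the tcp match below is only valid if the rule itself is marked tcp

        match = iptc.Match(rule, "tcp")
        match.dport = "1234"
        rule.add_match(match)

        rule.target = iptc.Target(rule, "ACCEPT")
        chain.insert_rule(rule)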


  • JavaScript: count minimal length of characters in text, ignoring special codes inside

    - by ilnur777
    I want the character count of the text in a textarea to ignore any special codes embedded in it - that is, the characters that make up a special code should not be counted. I use the special codes to insert smileys into the text, so I want to count only the length of the text itself, ignoring the codes. Here is the approximate code I tried to write, but I can't get it to work: // smileys // ======= function smileys(){ var smile = new Array(); smile[0] = "[:rolleyes:]"; smile[1] = "[:D]"; smile[2] = "[:blink:]"; smile[3] = "[:unsure:]"; smile[4] = "[8)]"; smile[5] = "[:-x]"; return(smile); } // symbols length limitation // ========================= function minSymbols(field){ var get_smile = smileys(); var text = field.value; for(var i=0; i<get_smile.length; i++){ for(var j=0; j<(text.length); j++){ if(get_smile[i]==text[j]){ text = field.value.replace(get_smile[i],""); } } } if(text.length < 50){ document.getElementById("saveB").disabled=true; } else { document.getElementById("saveB").disabled=false; } } How should the script be written to make this work? Thank you!


  • Avoiding anemic domain model - a real example

    - by cbp
    I am trying to understand Anemic Domain Models and why they are supposedly an anti-pattern. Here is a real-world example. I have an Employee class, which has a ton of properties - name, gender, username, etc. public class Employee { public string Name { get; set; } public string Gender { get; set; } public string Username { get; set; } // Etc.. mostly getters and setters } Next we have a system that involves rotating incoming phone calls and website enquiries (known as 'leads') evenly amongst sales staff. This system is quite complex as it involves round-robining enquiries, checking for holidays, employee preferences etc. So this system is currently separated out into a service: EmployeeLeadRotationService. public class EmployeeLeadRotationService : IEmployeeLeadRotationService { private IEmployeeRepository _employeeRepository; // ...plus lots of other injected repositories and services public void SelectEmployee(ILead lead) { // Etc. lots of complex logic } } Then in the code-behind of our website enquiry form we have code like this: public void SubmitForm() { var lead = CreateLeadFromFormInput(); var selectedEmployee = Kernel.Get<IEmployeeLeadRotationService>() .SelectEmployee(lead); Response.Write(selectedEmployee.Name + " will handle your enquiry. Thanks."); } I don't really encounter many problems with this approach, but supposedly this is something that I should run screaming from because it is an Anemic Domain Model. But for me it's not clear where the logic in the lead rotation service should go. Should it go in the lead? Should it go in the employee? What about all the injected repositories etc. that the rotation service requires - how would they be injected into the employee, given that most of the time when dealing with an employee we don't need any of these repositories?


  • Simulating O_NOFOLLOW (2): Is this other approach safe?

    - by Daniel Trebbien
    As a follow-up question to this one, I thought of another approach which builds off of @caf's answer for the case where I want to append to file name and create it if it does not exist. Here is what I came up with: Create a temporary directory with mode 0700 in a system temporary directory on the same filesystem as file name. Create an empty, temporary, regular file (temp_name) in the temporary directory (only serves as placeholder). Open file name for reading only, just to create it if it does not exist. The OS may follow name if it is a symbolic link; I don't care at this point. Make a hard link to name at temp_name (overwriting the placeholder file). If the link call fails, then exit. (Maybe someone has come along and removed the file at name, who knows?) Use lstat on temp_name (now a hard link). If S_ISLNK(lst.st_mode), then exit. open temp_name for writing, append (O_WRONLY | O_APPEND). Write everything out. Close the file descriptor. unlink the hard link. Remove the temporary directory. (All of this, by the way, is for an open source project that I am working on. You can view the source of my implementation of this approach here.) Is this procedure safe against symbolic link attacks? For example, is it possible for a malicious process to ensure that the inode for name represents a regular file for the duration of the lstat check, then make the inode a symbolic link with the temp_name hard link now pointing to the new, symbolic link? I am assuming that a malicious process cannot affect temp_name.
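
    The sequence above can be sketched with Python's os module, whose calls map one-to-one onto the syscalls involved (this is only an illustration of the steps, not the project's actual C implementation, and the placeholder file is skipped here because link() cannot overwrite an existing name anyway):

        import os, stat, sys, tempfile

        def append_via_hardlink(name, data):
            # Temporary directory with mode 0700, on the same filesystem as `name`.
            tmpdir = tempfile.mkdtemp(dir=os.path.dirname(name) or ".")
            temp_name = os.path.join(tmpdir, "hardlink")
            try:
                # Create `name` if it does not exist (the OS may follow a symlink here; we don't care yet).
                os.close(os.open(name, os.O_RDONLY | os.O_CREAT, 0o644))
                os.link(name, temp_name)        # fails if someone removed `name` in the meantime
                if stat.S_ISLNK(os.lstat(temp_name).st_mode):
                    sys.exit("refusing to append: target is a symbolic link")
                fd = os.open(temp_name, os.O_WRONLY | os.O_APPEND)
                try:
                    os.write(fd, data)
                finally:
                    os.close(fd)
            finally:
                if os.path.lexists(temp_name):
                    os.unlink(temp_name)
                os.rmdir(tmpdir)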


  • Modifying my website to allow anonymous comments

    - by David
    I write the code for my own website as an educational/fun exercise. Right now part of the website is a blog (like every other site out there :-/) which supports the usual basic blog features, including commenting on posts. But I only have comments enabled for logged-in users; I want to alter the code to allow anonymous comments - that is, I want to allow people to post comments without first creating a user account on my site, although there will still be some sort of authentication involved to prevent spam. Question: what information should I save for anonymous comments? I'm thinking at least display name and email address (for displaying a Gravatar), and probably website URL because I eventually want to accept OpenID as well, but would anything else make sense? Other question: how should I modify the database to store this information? The schema I have for the comment table is currently comment_id smallint(5) // The unique comment ID post_id smallint(5) // The ID of the post the comment was made on user_id smallint(5) // The ID of the user account who made the comment comment_subject varchar(128) comment_date timestamp comment_text text Should I add additional fields for name, email address, etc. to the comment table? (seems like a bad idea) Create a new "anonymous users" table? (and if so, how to keep anonymous user ids from conflicting with regular user ids) Or create fake user accounts for anonymous users in my existing users table? Part of what's making this tricky is that if someone tries to post an anonymous comment using an email address (or OpenID) that's already associated with an account on my site, I'd like to catch that and prompt them to log in.


  • Radix Sort in Python [on hold]

    - by Steven Ramsey
    I could use some help. How would you write a program in Python that implements a radix sort? Here is some info: A radix sort for base 10 integers is based on sorting punch cards, but it turns out the sort is very efficient. The sort utilizes a main bin and 10 digit bins. Each bin acts like a queue and maintains its values in the order they arrive. The algorithm begins by placing each number in the main bin. Then it considers the ones digit for each value. The first value is removed and placed in the digit bin corresponding to the ones digit. For example, 534 is placed in digit bin 4 and 662 is placed in digit bin 2. Once all the values in the main bin are placed in the corresponding digit bin for ones, the values are collected from bin 0 to bin 9 (in that order) and placed back in the main bin. The process continues with the tens digit, the hundreds, and so on. After the last digit is processed, the main bin contains the values in order. Use randint, found in random, to create random integers from 1 to 100000. Use a list comprehension to create a list of varying sizes (10, 100, 1000, 10000, etc.). To use indexing to access the digits, first convert the integer to a string. For this sort to work, all numbers must have the same number of digits. To zero-pad integers with leading zeros, use the string method str.zfill(). Once the main bin is sorted, convert the strings back to integers. I'm not sure how to start this. Any help is appreciated. Thank you.
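
    Following the description above, a minimal sketch (structure and names are my own, not a prescribed solution) might look like:

        from random import randint

        def radix_sort(values):
            # Zero-pad every number to the same width so each digit position can be indexed.
            width = len(str(max(values)))
            main_bin = [str(v).zfill(width) for v in values]

            # Work from the ones digit leftwards; each pass empties the main bin into ten
            # queue-like digit bins and then collects bin 0 through bin 9, in that order.
            for pos in range(width - 1, -1, -1):
                digit_bins = [[] for _ in range(10)]
                for item in main_bin:
                    digit_bins[int(item[pos])].append(item)
                main_bin = [item for dbin in digit_bins for item in dbin]

            return [int(item) for item in main_bin]

        if __name__ == "__main__":
            data = [randint(1, 100000) for _ in range(1000)]
            assert radix_sort(data) == sorted(data)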


  • Destructuring assignment in JavaScript

    - by Anders Rune Jensen
    As can be seen in the Mozilla changelog for JavaScript 1.7, they have added destructuring assignment. Sadly I'm not very fond of the syntax (why write a and b twice?): var a, b; [a, b] = f(); Something like this would have been a lot better: var [a, b] = f(); That would still be backwards compatible. Python-like destructuring would not be backwards compatible. Anyway, the best solution for JavaScript 1.5 that I have been able to come up with is: function assign(array, map) { var o = Object(); var i = 0; $.each(map, function(e, _) { o[e] = array[i++]; }); return o; } Which works like: var array = [1,2]; var _ = assign(array, { var1: null, var2: null }); _.var1; // prints 1 _.var2; // prints 2 But this really sucks because _ has no meaning. It's just an empty shell to store the names. But sadly it's needed because JavaScript doesn't have pointers. On the plus side you can assign default values in the case the values are not matched. Also note that this solution doesn't try to slice the array. So you can't do something like {first: 0, rest: 0}. But that could easily be done, if one wanted that behavior. What is a better solution?


  • ASP.NET - I am generating an .XLS file with a DLL, how do I grant permissions for writing to file?

    - by hamlin11
    I'm generating an .XLS file with a DLL (Excel Library http://code.google.com/p/excellibrary/) I've added this DLL as a reference to my project. The code to save the .XLS to disk is running, but it's encountering a permissions issue. I've attempted to set full access for IUSRS, Network Service, and Everyone just to see if I could get it working, and none of these seems to make a difference. Here's where I'm trying to write the file: c:/temp/test1.xls Here's the error: [SecurityException: Request for the permission of type 'System.Security.Permissions.FileIOPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.] System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet) +0 System.Security.CodeAccessPermission.Demand() +54 System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy) +2103 System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options, String msgPath, Boolean bFromProxy) +138 System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share) +89 System.IO.File.Open(String path, FileMode mode, FileAccess access, FileShare share) +58 ExcelLibrary.Office.CompoundDocumentFormat.CompoundDocument.Create(String file) +88 ExcelLibrary.Office.Excel.Workbook.Save(String file) +73 CHC_Reports.LitAnalysis.CreateSpreadSheet_Click(Object sender, EventArgs e) in C:\Users\brian\Desktop\Enterprise Manager\CHC_Reports\LitAnalysis.aspx.vb:19 System.Web.UI.WebControls.Button.OnClick(EventArgs e) +115 System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +140 System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +29 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +11041511 System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +11041050 System.Web.UI.Page.ProcessRequest() +91 System.Web.UI.Page.ProcessRequest(HttpContext context) +240 ASP.litanalysis_aspx.ProcessRequest(HttpContext context) +52 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +599 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +171 Any idea what I need to do to diagnose the permissions issue and allow the file creation? Thanks.


  • return sql query in xml format in python

    - by Ramy
    When I first started working at the company that I work at now, I created a Java application that would run batches of JasperReports. In order to determine which parameters to use for each report in the set of reports, I run an SQL query (on SQL Server). I wrote the application to take an XML file with a set of parameters for each report to be generated in the set. So my process has, effectively, become three steps: run the SQL query and return the results in XML format (using 'for XML auto'); run the results of the SQL query through an XSLT transformation so the XML is formatted in a way that is friendly to the Java application I wrote; run the Java application with that final XML file. As you can imagine, what I'd like to do is accomplish these steps in Python, but I'm not quite sure how to get started. I know how to run an SQL query in Python. I see plenty of documentation about how to write your own XML document with Python. I even see documentation for XSL transformations in Python. The big question is how to get the results of the SQL query into XML through Python. Any and all pointers would be very valuable. Thanks, _Ramy
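
    As a rough sketch of that missing piece (the driver, connection string, query and element names are placeholders, not taken from the original setup), the result set can be turned into XML with the standard library and the existing XSLT applied with lxml:

        import pyodbc                       # any DB-API driver for SQL Server would do
        import xml.etree.ElementTree as ET
        from lxml import etree              # only needed for the XSLT step

        conn = pyodbc.connect("DSN=reports;UID=user;PWD=secret")                  # placeholder
        cursor = conn.cursor()
        cursor.execute("SELECT report_name, param_value FROM report_params")      # placeholder

        # Build row-per-element XML, similar in spirit to what 'for XML auto' produced.
        root = ET.Element("reports")
        columns = [col[0] for col in cursor.description]
        for row in cursor.fetchall():
            el = ET.SubElement(root, "report")
            for name, value in zip(columns, row):
                el.set(name, "" if value is None else str(value))
        raw_xml = ET.tostring(root, encoding="unicode")

        # Run the existing stylesheet so the output stays compatible with the Java application.
        transform = etree.XSLT(etree.parse("existing_transform.xsl"))
        final_xml = str(transform(etree.fromstring(raw_xml)))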


  • What's your most controversial programming opinion?

    - by Jon Skeet
    This is definitely subjective, but I'd like to try to avoid it becoming argumentative. I think it could be an interesting question if people treat it appropriately. The idea for this question came from the comment thread from my answer to the "What are five things you hate about your favorite language?" question. I contended that classes in C# should be sealed by default - I won't put my reasoning in the question, but I might write a fuller explanation as an answer to this question. I was surprised at the heat of the discussion in the comments (25 comments currently). So, what contentious opinions do you hold? I'd rather avoid the kind of thing which ends up being pretty religious with relatively little basis (e.g. brace placing) but examples might include things like "unit testing isn't actually terribly helpful" or "public fields are okay really". The important thing (to me, anyway) is that you've got reasons behind your opinions. Please present your opinion and reasoning - I would encourage people to vote for opinions which are well-argued and interesting, whether or not you happen to agree with them.


  • Using LinqExtender to make OData feed fails

    - by BurningIce
    A pretty simple question, has anyone here tried to make a OData feed based on a IQueryable created with LinqExtender? I have created a simple Linq-provider that supports Where, Select, OrderBy and Take and wanted to expose it as an OData Feed. I keep getting an error though, and the Exception is a NullReference with the following StackTrace at System.Data.Services.Serializers.Serializer.GetObjectKey(Object resource, IDataServiceProvider provider, String containerName) at System.Data.Services.Serializers.Serializer.GetUri(Object resource, IDataServiceProvider provider, ResourceContainer container, Uri absoluteServiceUri) at System.Data.Services.Serializers.SyndicationSerializer.WriteEntryElement(IExpandedResult expanded, Object element, Type expectedType, Uri absoluteUri, String relativeUri, SyndicationItem target) at System.Data.Services.Serializers.SyndicationSerializer.<DeferredFeedItems>d__0.MoveNext() at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteItems(XmlWriter writer, IEnumerable`1 items, Uri feedBaseUri) at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteFeedTo(XmlWriter writer, SyndicationFeed feed, Boolean isSourceFeed) at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteFeed(XmlWriter writer) at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteTo(XmlWriter writer) at System.Data.Services.Serializers.SyndicationSerializer.WriteTopLevelElements(IExpandedResult expanded, IEnumerator elements, Boolean hasMoved) at System.Data.Services.Serializers.Serializer.WriteRequest(IEnumerator queryResults, Boolean hasMoved) at System.Data.Services.ResponseBodyWriter.Write(Stream stream) I've kinda narrowed it down to a issue where LinqExtender wraps every returned object, so that my object actually inherits itself - thats at least how it looks like in the debugger. These two queries are basicly the same. The first is the legacy-api where the OrderBy and Select is regular Linq to Objects. The second query is a "real" linq-provider made with LinqExtender. var db = CalendarDataProvider.GetCalendarEntriesByDate(DateTime.Now, DateTime.Now.AddMonths(1), Guid.Empty) .OrderBy(o => o.Title) .Select(o => new ODataCalendarEntry(o)); var query = new ODataCalendarEntryQuery() .Where(o => o.Start > DateTime.Now && o.End < DateTime.Now.AddMonths(1)) .OrderBy(o => o.Title); When returning db for the OData feed everything is fine, but returning query throws a NullRefenceException. I've tried all kind of tricks and even tried to project all the data into a new object like this, but still the same error return query.Select(o => new ODataCalendarEntry { Title = o.Title, Start = o.Start, End = o.End, Name = o.Name });


  • How to use a int2 database-field as a boolean in Java using JPA/Hibernate

    - by mg
    Hello... I am writing an application based on an already existing database (PostgreSQL) using JPA and Hibernate. There is an int2 column (activeYN) in a table, which is used as a boolean (0 = false (inactive), not 0 = true (active)). In the Java application I want to use this attribute as a boolean, so I defined the attribute like this: @Entity public class ModelClass implements Serializable { /*..... some Code .... */ private boolean active; @Column(name="activeYN") public boolean isActive() { return this.active; } /* .... some other Code ... */ } But there is an exception because Hibernate expects a boolean database field and not an int2. Can I do this mapping in any way while still using a boolean in Java? I have a possible solution for this, but I don't really like it. My "hacky" solution is the following: @Entity public class ModelClass implements Serializable { /*..... some Code .... */ private short active_USED_BY_JPA; // short because I need int2 /** * @Deprecated this method is only used by JPA. Use the method isActive() */ @Column(name="activeYN") public short getActive_USED_BY_JPA() { return this.active_USED_BY_JPA; } /** * @Deprecated this method is only used by JPA. * Use the method setActive(boolean active) */ public void setActive_USED_BY_JPA(short active) { this.active_USED_BY_JPA = active; } @Transient //jpa will ignore transient marked methods public boolean isActive() { return getActive_USED_BY_JPA() != 0; } @Transient public void setActive(boolean active) { this.active_USED_BY_JPA = (short) (active ? -1 : 0); } /* .... some other Code ... */ } Are there any other solutions for this problem? The "hibernate.hbm2ddl.auto" value in the Hibernate configuration is set to "validate". (Sorry, my English is not the best; I hope you understand it anyway.)


  • Oracle syntax - should we have to choose between the old and the new?

    - by Martin Milan
    Hi, I work on a code base in the region of about 1'000'000 lines of source, in a team of around eight developers. Our code is basically an application using an Oracle database, but the code has evolved over time (we have plenty of source code from the mid nineties in there!). A dispute has arisen amongst the team over the syntax that we are using for querying the Oracle database. At the moment, the overwhelming majority of our queries use the "old" Oracle Syntax for joins, meaning we have code that looks like this... Example of Inner Join select customers.*, orders.date, orders.value from customers, orders where customers.custid = orders.custid Example of Outer Join select customers.custid, contacts.ContactName, contacts.ContactTelNo from customers, contacts where customers.custid = contacts.custid(+) As new developers have joined the team, we have noticed that some of them seem to prefer using SQL-92 queries, like this: Example of Inner Join select customers.*, orders.date, orders.value from customers inner join orders on (customers.custid = orders.custid) Example of Outer Join select customers.custid, contacts.ContactName, contacts.ContactTelNo from customers left join contacts on (customers.custid = contacts.custid) Group A say that everyone should be using the the "old" syntax - we have lots of code in this format, and we ought to value consistency. We don't have time to go all the way through the code now rewriting database queries, and it wouldn't pay us if we had. They also point out that "this is the way we've always done it, and we're comfortable with it..." Group B however say that they agree that we don't have the time to go back and change existing queries, we really ought to be adopting the "new" syntax on code that we write from here on in. They say that developers only really look at a single query at a time, and that so long as developers know both syntax there is nothing to be gained from rigidly sticking to the old syntax, which might be deprecated at some point in the future. Without declaring with which group my loyalties lie, I am interested in hearing the opinions of impartial observers - so let the games commence! Martin. Ps. I've made this a community wiki so as not to be seen as just blatantly chasing after question points...


  • Adding a ServiceReference programmatically during async postback

    - by Oliver
    Is it possible to add a new ServiceReference instance to the ScriptManager on the Page during an asynchronous postback so that subsequently I can use the referenced web service through client side script? I'm trying to do this inside a UserControl that sits inside a Repeater, that's why adding the ScriptReference programmatically during Page_Load does not work here. EDIT 2: This is the code I call from my UserControl which does not do what I expect (adding the ServiceReference to the ScriptManager during the async postback): private void RegisterWebservice(Type webserviceType) { var scm = ScriptManager.GetCurrent(Page); if (scm == null) throw new InvalidOperationException("ScriptManager needed on the Page!"); scm.Services.Add(new ServiceReference("~/" + webserviceType.Name + ".asmx")); } My goal is for my my UserControl to be as unobtrusive to the surrounding application as possible; otherwise I would have to statically define the ServiceReference in a ScriptManagerProxy on the containing Page, which is not what I want. EDIT: I must have been tired when I wrote this post... because I meant to write ServiceReference not ScriptReference. Updated the text above accordingly. Now I have: <asp:ScriptManagerProxy runat="server" ID="scmProxy"> <Services> <asp:ServiceReference Path="~/UsefulnessWebService.asmx" /> </Services> </asp:ScriptManagerProxy> but I want to register the webservice in the CodeBehind.


  • MySQL GIS and Spatial Extensions - how to map regions and query against them

    - by chibineku
    I am trying to make a smartphone app which will return a list of users within a certain proximity, say 100m. It's easy to get the coordinates of my BlackBerry and write them to a database, but in order to return a list of other users within 100m, I need to pull every other record from the database and compare the distance between the two points, checking to see if it's within range, before outputting that user's information. This is going to be time consuming if there are many users involved. So I would like to map areas (countries, cities, I'm not yet sure of the resolution I'll need) so that I can first target a smaller subset of all users. This will save on processing time. I have read the basics of GIS and spatial querying on the mysql website but to be honest the query is over my head and I hate copying and pasting code without understanding it. Plus it only checks for proximity - I want to first check if a coordinate falls within a certain area. Does anyone have any experience of such matters and feel like giving me some pointers? Resources such as any preexisting databases of points describing countries as polygons would be really helpful too. Many thanks to anyone who takes the time :)
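
    One common shortcut, independent of the spatial extensions, is to prefilter candidates with a bounding box (which a plain index on the latitude and longitude columns can serve) and only run the exact distance check on the rows that survive. A rough sketch of the arithmetic, assuming coordinates in decimal degrees:

        from math import asin, cos, radians, sin, sqrt

        def bounding_box(lat, lng, meters):
            # Roughly 111,320 m per degree of latitude; a degree of longitude shrinks by cos(latitude).
            dlat = meters / 111320.0
            dlng = meters / (111320.0 * cos(radians(lat)))
            return lat - dlat, lat + dlat, lng - dlng, lng + dlng

        def haversine_m(lat1, lng1, lat2, lng2):
            # Exact great-circle distance in metres, applied only to the box-filtered candidates.
            lat1, lng1, lat2, lng2 = map(radians, (lat1, lng1, lat2, lng2))
            a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lng2 - lng1) / 2) ** 2
            return 2 * 6371000 * asin(sqrt(a))

        min_lat, max_lat, min_lng, max_lng = bounding_box(51.5074, -0.1278, 100)
        # SELECT user_id, lat, lng FROM users WHERE lat BETWEEN min_lat AND max_lat
        #                                       AND lng BETWEEN min_lng AND max_lng
        # ...then keep the rows where haversine_m(my_lat, my_lng, lat, lng) <= 100.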


  • How to modify/replace option set file when building from command line?

    - by Heinrich Ulbricht
    I build packages from a batch file using commands like: msbuild ..\lib\Package.dproj /target:Build /p:config=%1 The packages' settings are dependent on an option set: <Import Project="..\optionsets\COND_Defined.optset" Condition="'$(Base)'!='' And Exists('..\optionsets\COND_Defined.optset')"/> This option set defines a conditional symbol many of my packages depend on. The file looks like this: <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <PropertyGroup> <DCC_Define>CONDITION;$(DCC_Define)</DCC_Define> </PropertyGroup> <ProjectExtensions> <Borland.Personality>Delphi.Personality.12</Borland.Personality> <Borland.ProjectType>OptionSet</Borland.ProjectType> <BorlandProject> <Delphi.Personality/> </BorlandProject> <ProjectFileVersion>12</ProjectFileVersion> </ProjectExtensions> </Project> Now I need two builds: one with the condition defined and one without. My attack vector would be the option set file. I have some ideas on what to do: write a program which modifies the option set file, run this before batch build fiddle with the project files and modify the option set path to contain an environment variable, then have different option sets in different locations But before starting to reinvent the wheel I'd like to ask how you would tackle this task? Maybe there are already means meant to support such a case (like certain command line switches, things I could configure in Delphi or batch file magic).
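
    For the first idea (rewriting the option set before each build), a small helper script run from the batch file is probably enough. This is only a sketch that toggles the CONDITION symbol in the DCC_Define element; the file name and symbol come from the snippet above, everything else is an assumption:

        import sys
        import xml.etree.ElementTree as ET

        NS = "http://schemas.microsoft.com/developer/msbuild/2003"
        ET.register_namespace("", NS)   # write the file back without ns0: prefixes

        def set_condition(optset_path, enabled):
            tree = ET.parse(optset_path)
            define = tree.getroot().find(".//{%s}DCC_Define" % NS)
            define.text = "CONDITION;$(DCC_Define)" if enabled else "$(DCC_Define)"
            tree.write(optset_path, encoding="utf-8", xml_declaration=True)

        if __name__ == "__main__":
            # e.g.  python toggle_optset.py ..\optionsets\COND_Defined.optset on
            set_condition(sys.argv[1], sys.argv[2].lower() == "on")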


  • jQuery ajax call failing with undefined error

    - by Groo
    My jQuery ajax call is failing with an undefined error. My js code looks like this: $.ajax({ type: "POST", url: "Data/RealTime.ashx", data: "{}", contentType: "application/json; charset=utf-8", dataType: "json", timeout: 15000, dataFilter: function(data, type) { alert("RAW DATA: " + data + ", TYPE: "+ type); return data; }, error: function(xhr, textStatus, errorThrown) { alert("FAIL: " + xhr + " " + textStatus + " " + errorThrown); }, success: function(data) { alert("SUCCESS"); } }); My ajax source is a generic ASP.NET handler: [WebService(Namespace = "http://my.website.com")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] public class RealTime : IHttpHandler { public void ProcessRequest(HttpContext context) { context.Response.ContentType = "application/json"; context.Response.Write("{ data: [1,2,3] }"); context.Response.End(); } public bool IsReusable { get { return false; } } } Now, if I return an empty object ("{ }") in my handler, the call will succeed. But when I return any other JSON object, the call fails. The dataFilter handler shows that I am receiving a correct object. Firebug shows the response as expected, and the JSON tab shows that the object is parsed correctly. So what could be the cause?


  • Do you use an architectural framework for Flex/AIR development?

    - by Christophe Herreman
    Given that Flex is still a relatively young technology, there are already a bunch of architectural frameworks available for Flex/AIR (and Flash) development, the main ones being Cairngorm and PureMVC. The amount of architectural frameworks is remarkable compared to other technologies. I was wondering how many of you use an architectural framework for Flex development. If so, why, or why not if you don't use any? To share my own experience and point of view: I have used Cairngorm (and ARP for Flash development) on a variety of projects and found that at times, we needed to write extra code just to fit into the framework, which obviously didn't feel right. Although I haven't used PureMVC on many occasions, I have the same gut feeling after looking at the examples applications. Architectural frameworks equal religion in some way. Most followers are convinced that their framework is THE framework and are not open or very skeptical when it comes to using other frameworks. (I also find myself hesitant and skeptical to check out new frameworks, but that is mainly because I would rather wait until the hype is over.) In conclusion, I'm thinking that it is better to have a sound knowledge of patterns and practices that you can apply in your application instead of choosing a framework and sticking to it. There is simply no right or wrong and I don't believe that there will ever be a framework that is considered the holy grail.


  • Disassembler that tracks what value is where

    - by Martin C. Martin
    So lately I've been looking at the disassembly of my C++ code, and having to manually track what's in each register, like this: 95: 48 8b 16 mov (%rsi),%rdx ; %rdx = raggedCross.sink 98: 48 8b 42 38 mov 0x38(%rdx),%rax ; %rax = sink.table 9c: 8b 4a 10 mov 0x10(%rdx),%ecx ; %ecx = sink.baseCol 9f: 48 8b 70 50 mov 0x50(%rax),%rsi ; %rsi = table.starts a3: 89 c8 mov %ecx,%eax ; %eax = baseCol a5: 83 c1 1c add $0x1c,%ecx ; %ecx = baseCol + 1 And so on. The comments are mine, added by hand, from looking up the offset of various fields (e.g. sink, table, baseCol, starts) in the C++ classes. It's straightforward to do, but tedious and time-consuming: the perfect thing for a program to be doing. gdb seems to know the offset of various fields within a struct: I can do &((Table *)0x1200)->starts and it tells me the right address. So, this information is around. Is there some disassembler that can use this info to annotate the code for me? Failing that, I could write my own. Where does gdb get the offsets?


  • Putting a C++ Vector as a Member in a Class that Uses a Memory Pool

    - by Deep-B
    Hey, I've been writing a multi-threaded DLL for database access using ADO/ODBC for use with a legacy application. I need to keep multiple database connections for each thread, so I've put the ADO objects for each connection in an object and thinking of keeping an array of them inside a custom threadInfo object. Obviously a vector would serve better here - I need to delete/rearrange objects on the go and a vector would simplify that. Problem is, I'm allocating a heap for each thread to avoid heap contention and stuff and allocating all my memory from there. So my question is: how do I make the vector allocate from the thread-specific heap? (Or would it know internally to allocate memory from the same heap as its wrapper class - sounds unlikely, but I'm not a C++ guy) I've googled a bit and it looks like I might need to write an allocator or something - which looks like so much of work I don't want. Is there any other way? I've heard vector uses placement-new for all its stuff inside, so can overloading operator new be worked into it? My scant knowledge of the insides of C++ doesn't help, seeing as I'm mainly a C programmer (even that - relatively). It's very possible I'm missing something elementary somewhere. If nothing easier comes up - I might just go and do the array thing, but hopefully it won't come to that. I'm using MS-VC++ 6.0 (hey, it's rude to laugh! :-P ). Any/all help will be much appreciated.


  • Is It Incorrect to Make Domain Objects Aware of The Data Access Layer?

    - by Noah Goodrich
    I am currently working on rewriting an application to use Data Mappers that completely abstract the database from the Domain layer. However, I am now wondering which is the better approach to handling relationships between Domain objects: Call the necessary find() method from the related data mapper directly within the domain object Write the relationship logic into the native data mapper (which is what the examples tend to do in PoEAA) and then call the native data mapper function within the domain object. Either it seems to me that in order to preserve the 'Fat Model, Skinny Controller' mantra, the domain objects have to be aware of the data mappers (whether it be their own or that they have access to the other mappers in the system). Additionally it seems that Option 2 unnecessarily complicates the data access layer as it creates table access logic across multiple data mappers instead of confining it to a single data mapper. So, is it incorrect to make the domain objects aware of the related data mappers and to call data mapper functions directly from the domain objects? Update: These are the only two solutions that I can envision to handle the issue of relations between domain objects. Any example showing a better method would be welcome.


  • strange behavior in python

    - by fsm
    The tags might not be accurate since I am not sure where the problem is. I have a module where I am trying to read some data from a socket, and write the results into a file (append) It looks something like this, (only relevant parts included) if __name__ == "__main__": <some init code> for line in file: t = Thread(target=foo, args=(line,)) t.start() while nThreads > 0: time.sleep(1) Here are the other modules, def foo(text): global countLock, nThreads countLock.acquire() nThreads += 1 countLock.release() """connect to socket, send data, read response""" writeResults(text, result) countLock.acquire() nThreads -= 1 countLock.release() def writeResults(text, result): """acquire file lock""" """append to file""" """release file lock""" Now here's the problem. Initially, I had a typo in the function 'foo', where I was passing the variable 'line' to writeResults instead of 'text'. 'line' is not defined in the function foo, it's defined in the main block, so I should have seen an error, but instead, it worked fine, except that the data was appended to the file multiple times, instead of being written just once, which is the required behavior, which I got when I fixed the typo. My question is, 1) Why didn't I get an error? 2) Why was the writeResults function being called multiple times?
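
    A hedged guess at the first question: a name that is never assigned inside a function is looked up in the module's globals at call time, and since the for loop in the __main__ block runs at module level, line was a perfectly valid global by the time foo called writeResults - hence no NameError. A tiny standalone illustration of the same effect:

        import threading
        import time

        def worker():
            # 'line' is not assigned in this function, so Python resolves it in the
            # module globals when this runs - no NameError, just not the value you meant.
            print("worker sees:", line)

        for line in ["first", "second", "third"]:
            threading.Thread(target=worker).start()

        time.sleep(0.1)   # by now every thread may well have printed the *last* value of 'line'

    That would also account for the second question: each thread appended whatever the shared global line happened to hold at that moment, rather than its own text argument, so the same data could end up in the file several times.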


  • How to work with threading and ConcurrentQueue<T>

    - by dboarman
    I am trying to figure out what the best way of working with a queue will be. I have a process that returns a DataTable. Each DataTable, in turn, is merged with the previous DataTable. There is one problem, too many records to hold until the final BulkCopy (OutOfMemory). So, I have determined that I should process each incoming DataTable immediately. Thinking about the ConcurrentQueue<T>...but I don't see how the WriteQueuedData() method would know to dequeue a table and write it to the database. For instance: public class TableTransporter { private ConcurrentQueue<DataTable> tableQueue = new ConcurrentQueue<DataTable>(); public TableTransporter() { tableQueue.OnItemQueued += new EventHandler(WriteQueuedData); // no events available } public void ExtractData() { DataTable table; // perform data extraction tableQueue.Enqueue(table); } private void WriteQueuedData(object sender, EventArgs e) { BulkCopy(e.Table); } } My first question is, aside from the fact that I don't actually have any events to subscribe to, if I call ExtractData() asynchronously will this be all that I need? Second, is there something I'm missing about the way ConcurrentQueue<T> functions and needing some form of trigger to work asynchronously with the queued objects?


  • how do i filter my lucene search results?

    - by Andrew Bullock
    Say my requirement is "search for all users by name, who are over 18". If I were using SQL, I might write something like: Select * from [Users] Where ([firstname] like '%' + @searchTerm + '%' OR [lastname] like '%' + @searchTerm + '%') AND [age] >= 18 However, I'm having difficulty translating this into Lucene.Net. This is what I have so far: var parser = new MultiFieldQueryParser({ "firstname", "lastname"}, new StandardAnalyser()); var luceneQuery = parser.Parse(searchterm) var query = FullTextSession.CreateFullTextQuery(luceneQuery, typeof(User)); var results = query.List<User>(); How do I add in the "where age >= 18" bit? I've heard about .SetFilter(), but this only accepts LuceneQueries, and not IQueries. If SetFilter is the right thing to use, how do I make the appropriate filter? If not, what do I use and how do I do it? Thanks! P.S. This is a vastly simplified version of what I'm trying to do, for clarity; my WHERE clause is actually a lot more complicated than shown here. In reality I need to check if ids exist in subqueries and check a number of unindexed properties. Any solutions given need to support this. Thanks

