Search Results

Search found 9634 results on 386 pages for 'proxy pattern'.

Page 336/386

  • Tableview not updating correctly after adding person

    - by tazboy
    I have to be missing something simple here but it escapes me. After the user enters a new person to a mutable array I want to update the table. The mutable array is the datasource. I believe my issue lies within cellForRowAtIndexPath. - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { TextFieldCell *customCell = (TextFieldCell *)[tableView dequeueReusableCellWithIdentifier:@"TextCellID"]; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"cell"]; if (indexPath.row == 0) { if (customCell == nil) { NSArray *nibObjects = [[NSBundle mainBundle] loadNibNamed:@"TextFieldCell" owner:nil options:nil]; for (id currentObject in nibObjects) { if ([currentObject isKindOfClass:[TextFieldCell class]]) customCell = (TextFieldCell *)currentObject; } } customCell.nameTextField.delegate = self; cell = customCell; } else { if (cell == nil) { cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:@"cell"]; cell.textLabel.text = [[self.peopleArray objectAtIndex:indexPath.row-1] name]; NSLog(@"PERSON AT ROW %d = %@", indexPath.row-1, [[self.peopleArray objectAtIndex:indexPath.row-1] name]); NSLog(@"peopleArray's Size = %d", [self.peopleArray count]); } } return cell; } When I first load the view everything is great. This is what prints: PERSON AT ROW 0 = Melissa peopleArray's Size = 2 PERSON AT ROW 1 = Dave peopleArray's Size = 2 After I add someone to that array I get this: PERSON AT ROW 1 = Dave peopleArray's Size = 3 PERSON AT ROW 2 = Tom peopleArray's Size = 3 When I add a second person I get: PERSON AT ROW 2 = Tom peopleArray's Size = 4 PERSON AT ROW 3 = Ralph peopleArray's Size = 4 Why is not printing everyone in the array? This pattern continues and it only ever prints two people, and it's always the last two people. What the heck am I missing?
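
    A likely culprit is that the cell's text is assigned only inside the if (cell == nil) branch, so reused cells keep whatever text they were created with. A minimal sketch of the usual fix, with the configuration moved outside the nil check (names follow the question's code):

        } else {
            if (cell == nil) {
                cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                              reuseIdentifier:@"cell"];
            }
            // Configure the cell on every call, not only when it is freshly allocated,
            // otherwise dequeued cells show stale names after the array changes.
            cell.textLabel.text = [[self.peopleArray objectAtIndex:indexPath.row - 1] name];
        }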

    Read the article

  • Unable to implement HTTP tunneling correctly in order to enable Java RMI calls over the internet (and under NAT/ISP)

    - by Lokesh Kumar
    In my previous question, How to Setup RMI Server under (NAT/ISP), I described the setup. Now I am able to start my RMI server by installing the Apache Tomcat 6.0 server, and I have also installed servlet programs into the Tomcat server in order to enable HTTP tunneling. My servlet code: (1) SimplifiedServletHandler.java (2) ServletForwardCommand.java These servlets reside inside: C:\Program Files\Apache Software Foundation\Tomcat 6.0\webapps\examples\WEB-INF\classes\ One more thing that I have added to my CalcultorClient.java program: try { RMISocketFactory.setSocketFactory(new sun.rmi.transport.proxy.RMIHttpToCGISocketFactory()); } catch (IOException ignored) { System.out.println("Error :- " + ignored.getMessage()); } But when I try to make the client connect to the server (under ISP/NAT) I get the following exception: RemoteException java.rmi.UnmarshalException: Error unmarshaling return header; nested exception is: java.io.IOException: HTTP request failed I don't know the correct reason behind this exception, but I think that I haven't installed or invoked my servlet programs properly on the server side. Can anybody tell me the correct reason behind this error/exception? And if you think it is a servlet problem, please tell me the correct procedure to run my servlet program inside the Tomcat server.

    Read the article

  • Convert pre-IEEE-754 C++ floating-point numbers to/from C#

    - by Richard Kucia
    Before .NET, before math coprocessors, before IEEE-754, Microsoft defined a bit pattern for floating-point numbers. Old versions of the C++ compiler happily used that definition. I am writing a C# app that needs to read/write such floating-point numbers in a file. How can I do the conversions between the 2 bit formats? I need conversion methods in both directions. This app is going to run in a PocketPC/WinCE environment. Changing the structure of the file is out of scope for this project. Is there a C++ compiler option that instructs it to use the old FP format? That would be ideal. I could then exchange data between the C# code and C++ code by using a null-terminated text string, and the C++ methods would be simple wrappers around sprintf and atof functions. At the very least, I'm hoping someone can reply with the bit definitions for the old FP format, so I can put together a low-level bit manipulation algorithm if necessary. Thanks.
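
    For reference, the pre-IEEE format used by old Microsoft compilers is usually the 4-byte Microsoft Binary Format (MBF): the last byte is an exponent biased by 129, the top bit of the third byte is the sign, and the remaining 23 bits are the mantissa with an implicit leading 1. Treating that as the format in question (an assumption worth verifying against the actual files), a C# sketch of the MBF-to-IEEE-754 direction could look like this; the reverse direction rebiases the exponent the other way:

        // Assumes 4-byte little-endian MBF input; returns the equivalent IEEE 754 single.
        static float MbfToIeee754(byte[] mbf)
        {
            if (mbf[3] == 0) return 0.0f;                      // MBF uses exponent 0 to mean zero
            uint sign = (uint)(mbf[2] & 0x80) << 24;           // sign moves up to IEEE bit 31
            uint exp  = (uint)(mbf[3] - 2) << 23;              // rebias: 129 (MBF) -> 127 (IEEE)
            uint man  = ((uint)(mbf[2] & 0x7F) << 16) | ((uint)mbf[1] << 8) | mbf[0];
            return BitConverter.ToSingle(BitConverter.GetBytes(sign | exp | man), 0);
        }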

    Read the article

  • Factory Girl: Automatically assigning parent objects

    - by Ben Scheirman
    I'm just getting into Factory Girl and I am running into a difficulty that I'm sure should be much easier. I just couldn't twist the documentation into a working example. Assume I have the following models: class League < ActiveRecord::Base has_many :teams end class Team < ActiveRecord::Base belongs_to :league has_many :players end class Player < ActiveRecord::Base belongs_to :team end What I want to do is this: team = Factory.build(:team_with_players) and have it build up a bunch of players for me. I tried this: Factory.define :team_with_players, :class => :team do |t| t.sequence {|n| "team-#{n}" } t.players {|p| 25.times {Factory.build(:player, :team => t)} } end But this fails on the :team=>t section, because t isn't really a Team, it's a Factory::Proxy::Builder. I have to have a team assigned to a player. In some cases I want to build up a League and have it do a similar thing, creating multiple teams with multiple players. What am I missing?
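
    With the older Factory Girl API the question uses, one common way around the proxy problem is to build the children in an after_build callback, where the real Team instance is available (this assumes a Factory Girl version that supports callbacks):

        Factory.define :team_with_players, :class => :team do |t|
          t.sequence(:name) { |n| "team-#{n}" }
          t.after_build do |team|
            # "team" here is a real Team instance, not a Factory::Proxy::Builder,
            # so the players can be built against it without touching the database.
            25.times { team.players.build(Factory.attributes_for(:player)) }
          end
        end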

    Read the article

  • Grid is not rendered properly in ExtJS 4 when using a Store

    - by user548543
    Here is the Src code for HTML file <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4 /strict.dtd"> <html> <head> <title>MVC Architecture</title> <link rel="stylesheet" type="text/css" href="/bh/extjs/resources/css/ext-all.css" /> <script type="text/javascript" src="extjs/ext-debug.js"></script> <script type="text/javascript" src="Main.js"></script> </head> <body> </body> </html> File path: /bh/Main.js [Main File] Ext.require('Ext.container.Viewport'); Ext.application({ name: 'App', appFolder: 'app', controllers: ['UserController'], launch: function() { Ext.create('Ext.container.Viewport', { layout: 'border', items: [ { xtype: 'userList' } ] }); } }); File path: /app/controller/UserController.js [Controller] Ext.define('App.controller.UserController',{ extend: 'Ext.app.Controller', stores: ['UserStore'], models:['UserModel'], views:['user.UserList'], init: function() { this.getUserStoreStore().load(); } }); File path: /app/store/UserStore.js Ext.define('App.store.UserStore', { extend: 'Ext.data.Store', model: 'App.model.UserModel', proxy: { type: 'ajax', url: 'app/data/contact.json' } }); File path: /app/model/UserModel.js [Model] Ext.define('App.model.UserModel',{ extends:'Ext.data.Model', fields:[ {name: 'name', type: 'string'}, {name: 'age', type: 'string'}, {name: 'phone', type: 'string'}, {name: 'email', type: 'string'} ] }); File path: /app/view/UserList.js [View] Ext.define('App.view.user.UserList' ,{ extend: 'Ext.grid.Panel', alias:'widget.userList', title:'Contacts', region:'center', resizable:true, initComponent: function() { this.store = 'UserStore'; this.columns = [ {text: 'Name',flex:1,sortable: true,dataIndex: 'name'}, {text: 'Age',flex:1,sortable: true,dataIndex: 'age'}, {text: 'Phone',flex:1,sortable: true,dataIndex: 'phone'}, {text: 'Email',flex:1,sortable: true,dataIndex: 'email'} ]; this.callParent(arguments); } }); In fire bug it shows the JSON response as follows: [{ "name": "Aswini", "age": "32", "phone": "555-555-5555", "email": "[email protected]" }] Why the Data has not been displayed although I have a valid json response. Please help!!!
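
    One thing that stands out in the model definition above is extends: — Ext.define expects the key to be extend:, and with the typo the fields are never registered on the model, so the grid has nothing to map the JSON onto. A hedged guess at the minimal fix:

        // File path: /app/model/UserModel.js -- note "extend", not "extends"
        Ext.define('App.model.UserModel', {
            extend: 'Ext.data.Model',
            fields: [
                { name: 'name',  type: 'string' },
                { name: 'age',   type: 'string' },
                { name: 'phone', type: 'string' },
                { name: 'email', type: 'string' }
            ]
        });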

    Read the article

  • I like the way they designed/architected it, but how do I implement it?

    - by Rachel
    Summary: I have different components on the homepage and each component shows some promotion to the user. I have the Cart as one component, and depending upon the content of the cart, promotions are shown. I have to track the user's online activities and send that information to Omniture for report generation. My components are loaded asynchronously (they are loaded when an Ajax request fires), so there is no fixed pattern, or rather no information on when components will appear on the page. In order to pass information to Omniture I need to call the track function on $(document).ready() and append information for each component (7 parameters are required by Omniture for each component). So in the init/config function of each component I am calling Omniture and passing the parameters, but then the number of Omniture calls is directly proportional to the number of components on the page, which is not acceptable because each call to Omniture is very expensive. I am looking for a way to club together the information about the 7 parameters and then make one call to Omniture carrying all of it. Note that I do not know when the components are loaded, so there is no predefined time or number of components that will be loaded. The problem is that I am calling the track function when the document is ready, but components are loaded after the call to Omniture has been made. So my question is: how can I collect the information for all the components and then make just one call to Omniture to send it? As mentioned, I do not know when the components are loaded, as they are loaded on the Ajax request. I hope I have been able to explain my challenge, and would appreciate a design/architecture solution for it.
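
    One way to decouple the components from a single tracking call is to have each component push its seven parameters into a shared queue and reset a short quiet-period timer; once no new component has registered for a moment, one combined call goes out. A rough sketch (sendToOmniture stands in for however the site actually fires the beacon, and the 500 ms quiet period is an arbitrary choice):

        var pending = [];
        var flushTimer = null;

        function registerComponent(params) {        // each component calls this from its init/config
            pending.push(params);
            clearTimeout(flushTimer);               // wait until components go quiet before sending
            flushTimer = setTimeout(flush, 500);
        }

        function flush() {
            if (pending.length === 0) return;
            sendToOmniture(pending);                // one call carrying every component's parameters
            pending = [];
        }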

    Read the article

  • JMeter is not correctly extracting the value with the regex extractor

    - by Chris
    JMeter is not correctly extracting the value with the regex. When I play with this regex (NAME="token" \s value="([^"]+?)") in the Regex Coach with the following HTML, everything works fine, but when I add the regex to the request with a Regular Expression Extractor it doesn't find the value, even though the output is the same HTML. <HTML>< script type="text/javascript" > function dostuff(no, applicationID) { submitAction('APPS_NAME' , 'noSelected=' + no + '&applicationID=' + applicationID); }< /script> <FORM NAME="baseForm" ACTION="" METHOD="POST"> <input type="hidden" NAME="token" value="fc95985af8aa5143a7b1d4fda6759a74" > <div id="loader" align="center"> <div> <strong style="color: #003366;">Loading...</strong> </div> <img src="images/initial-loader.gif" align="top"/> </div> <BODY ONLOAD="dostuff('69489','test');"> In the Regular Expression Extractor: reference name: token; Regex: (NAME="token" \s value="([^"]+?)"); template: $2$; match no.: 1; Default value: wrong-token. The request following the POST of the previous code returns POST data: token=wrong-token in the next request in the tree viewer. But when I check the real request in a proxy, the token is there. Note: I tried the regex without the brackets and that didn't work either. Does anybody have an idea what's wrong here? Why can't JMeter find my token with the regex extractor?

    Read the article

  • How do I get the inserted id (or object) after an insert with the FormView/ObjectDataSource controls

    - by drs9222
    I have a series of classes that loosely fit the following pattern: public class CustomerInfo { public int Id {get; set;} public string Name {get; set;} } public class CustomerTable { public bool Insert(CustomerInfo info) { /*...*/ } public bool Update(CustomerInfo info) { /*...*/ } public CustomerInfo Get(int id) { /*...*/ } /*...*/ } After a successful insert the Insert method will set the Id property of the CustomerInfo object that was passed in. I've used these classes in several places and would prefer to altering them. Now I'm experimenting with writing an ASP.NET page for inserting and updating the records. I'm currently using the ObjectDataSource and FormView controls: <asp:ObjectDataSource TypeName="CustomerTable" DataObjectTypeName="CustomerInfo" InsertMethod="Insert" UpdateMethod="Update" SelectMethod="Get" /> I can successfully Insert and Update records. I would like to switch the FormView's mode from Insert to Edit after a successful insert. My first attempt was to switch the mode in the ItemInserted event. This of course did not work. I was using a QueryStringParameter for the id which of course wan't set when inserting. So, I switched to manually populating the InputParameters during the ObjectDataSource's Selecting event. The problem with this is I need to know the id of the newly inserted record which I can't find a good way to get. I understand that I can access the Insert method's return value, and out parameters in the ItemInserted event of course my method doesn't return the id using any of these methods. I can't find anyway to access the id or the CustomerInfo object that was inserted after the insert completes. The best I've been able to do is to save the CustomerInfo object in the ObjectDataSource's Inserting event. This feels like an odd way to do this. I figure that there must be a better way to do this and I'll kick myself when I see it. Any ideas?
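
    For what it's worth, capturing the object during the Inserting event is a fairly common workaround, since the Inserted event only exposes the method's return value. A sketch of that pattern (control IDs are illustrative, and it assumes Insert sets Id on the same instance it was handed):

        private CustomerInfo insertedCustomer;

        protected void CustomerDataSource_Inserting(object sender, ObjectDataSourceMethodEventArgs e)
        {
            // With DataObjectTypeName set, the data object is the single input parameter.
            insertedCustomer = (CustomerInfo)e.InputParameters[0];
        }

        protected void CustomerFormView_ItemInserted(object sender, FormViewInsertedEventArgs e)
        {
            // By the time this fires, Insert() has populated Id on the captured instance.
            CustomerDataSource.SelectParameters["id"].DefaultValue = insertedCustomer.Id.ToString();
            CustomerFormView.ChangeMode(FormViewMode.Edit);
        }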

    Read the article

  • Why put a DAO layer over a persistence layer (like JDO or Hibernate)

    - by Todd Owen
    Data Access Objects (DAOs) are a common design pattern, and recommended by Sun. But the earliest examples of Java DAOs interacted directly with relational databases -- they were, in essence, doing object-relational mapping (ORM). Nowadays, I see DAOs on top of mature ORM frameworks like JDO and Hibernate, and I wonder if that is really a good idea. I am developing a web service using JDO as the persistence layer, and am considering whether or not to introduce DAOs. I foresee a problem when dealing with a particular class which contains a map of other objects: public class Book { // Book description in various languages, indexed by ISO language codes private Map<String,BookDescription> descriptions; } JDO is clever enough to map this to a foreign key constraint between the "BOOKS" and "BOOKDESCRIPTIONS" tables. It transparently loads the BookDescription objects (using lazy loading, I believe), and persists them when the Book object is persisted. If I was to introduce a "data access layer" and write a class like BookDao, and encapsulate all the JDO code within this, then wouldn't this JDO's transparent loading of the child objects be circumventing the data access layer? For consistency, shouldn't all the BookDescription objects be loaded and persisted via some BookDescriptionDao object (or BookDao.loadDescription method)? Yet refactoring in that way would make manipulating the model needlessly complicated. So my question is, what's wrong with calling JDO (or Hibernate, or whatever ORM you fancy) directly in the business layer? Its syntax is already quite concise, and it is datastore-agnostic. What is the advantage, if any, of encapsulating it in Data Access Objects?

    Read the article

  • Reporting tool for OLAP, *not* OLTP!

    - by Stefan Moser
    I'm looking for a control that I can put on top of an already existing OLAP star schema to allow the user to define their own "queries" and generate reports. Right now I have some predefined reports built on top of the cubes, but I'd like to allow the user to define their own criteria based on the cubes that I've created. I've found lots of products that will allow you to treat a transactional table like an OLAP cube, but nothing specifically for pre-existing cubes. EDIT: Let me be clear, I know there are countless reporting tools out there that claim to report on OLAP cubes. The problem is they all assume they are looking at transactional data and try to create their own cubes. I have tables that contain tens, if not hundreds of millions of records. Most tools crash when handling this much data, the others just run incredible slowly. I don't want a tool that is targeting the business people. I want a tool that understands what a star and snowflake schema is. I want to be able to tell it what the fact tables are and what the dimension tables are, and then creates a UI on top of them. This is an easier problem to solve for the tool vendor because I am spoon feeding them the cubes. I want to rely on the fact that cubes are a standardized pattern and I want a tool that takes advantage of this fact. I want a tool that targets developers and starts with the assumption that I actually know how to manage my data, it just needs to build pretty reports for me and not crumble under the weight of my data.

    Read the article

  • Android AlertDialog: clicking Positive or Negative just dismisses the dialog

    - by timmyg13
    Trying to have a sharedpreference where you click on it to restore default values, then an alert dialog comes up to ask if you are sure, but it is not doing anything, just dismissing the alertdialog. public class SettingsActivity extends PreferenceActivity implements OnSharedPreferenceChangeListener { /** Called when the activity is first created. */ @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); c = this; addPreferencesFromResource(R.xml.settings); SharedPreferences sp = PreferenceManager .getDefaultSharedPreferences(this); sp.registerOnSharedPreferenceChangeListener(this); datasource = new PhoneNumberDataSource(this); Preference restore = (Preference) findPreference("RESTORE"); restore.setOnPreferenceClickListener(new OnPreferenceClickListener() { @Override public boolean onPreferenceClick(Preference preference) { createDialog(); return false; } }); } void createDialog() { Log.v("createDialog", ""); FrameLayout fl = new FrameLayout(c); AlertDialog.Builder b = new AlertDialog.Builder(c).setView(fl); b.setTitle("Restore Defaults?"); b.setPositiveButton("Restore", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface d, int which) { Log.v("restore clicked:", ""); } }); b.setNegativeButton("Cancel", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface d, int which) { Log.v("cancel clicked:", ""); d.dismiss(); } }).create(); b.show(); } } Neither the "cancel clicked" nor "restore clicked" show in the log. I do get a weird "W/InputManagerService(64): Window already focused, ignoring focus gain of: com.android.internal.view.IInputMethodClient$Stub$Proxy@450317b8" in the log.

    Read the article

  • Elegant Algorithm for Parsing Data Stream Into Record

    - by Matt Long
    I am interfacing with a hardware device that streams data to my app over Wifi. The data is streaming in just fine. The data contains a character header (DATA:) that indicates a new record has begun. The issues is that the data I receive doesn't necessarily fall on the header boundary, so I have to capture the data until what I've captured contains the header. Then, everything that precedes the header goes into the previous record and everything that comes after it goes into a new record. I have this working, but wondered if anyone has done this before and has a good computer-sciencey way to solve the problem. Here's what I do: Convert the NSData of the current read to an NSString Append the NSString to a placeholder string Check placeholder string for the header (DATA:). If the header is not there, just wait for the next read. If the header exists, append whatever precedes it to a previous record placeholder and hand that placeholder off to an array as a complete record that I can further parse into fields. Take whatever shows up after the header and place it in the record placeholder so that it can be appended to in the next read. Repeat steps 3 - 5. Let me know if you see any flaws with this or have a suggestion for a better way. Seems there should be some design pattern for this, but I can't think of one. Thanks.
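
    The steps above map fairly directly onto a split-on-header approach; a compact Objective-C sketch, assuming buffer is an NSMutableString and records an NSMutableArray kept between reads (the method name is illustrative):

        - (void)appendIncomingData:(NSData *)data {
            NSString *chunk = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
            [self.buffer appendString:chunk];

            NSArray *pieces = [self.buffer componentsSeparatedByString:@"DATA:"];
            // Every piece except the last ends at a header boundary, so it is a complete record.
            for (NSUInteger i = 0; i + 1 < [pieces count]; i++) {
                if ([[pieces objectAtIndex:i] length] > 0) {
                    [self.records addObject:[pieces objectAtIndex:i]];
                }
            }
            // The trailing piece may be a partial record; keep it for the next read.
            [self.buffer setString:[pieces lastObject]];
        }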

    Read the article

  • Publishing artifacts with sources on Archiva

    - by Palimondo
    At work I'm dipping my toes in managing project dependencies with maven. We use Apache Archiva (1.2.1) as a local repository and proxy. I'm adding artifact for open source project, that is not published on any public repository. I've learned that to publish the sources I should use the Classifier field on Upload artifact page. The sources are then listed alongside the jar and pom when I browse the repository. But when I update my maven dependencies I get only the jar and pom from the repository. I noticed that sources are also missing when the archiva proxies for me the downloads from other public repositories. I didn't find any configuration options in Archiva's admin pages to serve the sources... What am I missing? Update: I was missing the fact that artifact sources have to be downloaded manually. I.e. the maven client has to request them, which is controlled by command line option -DdownloadSources=true. Maven Integration for Eclipse has a preference setting to always download them as described in Resolving artifact sources. Archiva then serves the sources for local artifacts or proxies the request to remote repositories and caches the sources for future requests.
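
    For the record, the flag mentioned in the update can also be exercised straight from the command line; either of these asks Maven (and therefore Archiva, acting as the proxy) for the source artifacts:

        # Resolve the -sources jars for everything in the POM
        mvn dependency:sources

        # Or, when generating Eclipse metadata, pull sources (and javadocs) alongside the jars
        mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs=true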

    Read the article

  • Strange File Upload issue with asp.net site on a web farm

    - by Coov
    I have a basic asp.net file upload page. When I test file uploads from my local machine, it works fine. When I test file uploads from our dev machine, it works fine. When I deploy the site to our production webfarm, it behaves strangely. If I access the site from off the network, I can load file-after-file without issue. If I access the site from within our network, I can load the first file just fine but any subsequent files result it a bad sequence of commands error. I'm not sure if this is web farm issue, a network issue, or something else. It feels like a connection is not being disposed of properly but it doesn't make sense why everything works fine remotely. Markup: <asp:FileUpload ID="FileUpload1" runat="server" Width="350px" /> <asp:Button ID="btnSubmit" runat="server" Text="Upload" onclick="btnSubmit_Click" /> Code: if (FileUpload1.HasFile) { FtpWebRequest ftpRequest; FtpWebResponse ftpResponse; ftpRequest = (FtpWebRequest)FtpWebRequest.Create(new Uri("ftp://ftp.myftpsite.com/" + FileUpload1.FileName)); ftpRequest.Method = WebRequestMethods.Ftp.UploadFile; ftpRequest.Proxy = null; ftpRequest.UseBinary = true; ftpRequest.Credentials = new NetworkCredential("username", "password"); ftpRequest.KeepAlive = false; byte[] fileContents = new byte[FileUpload1.PostedFile.ContentLength]; using (Stream fr = FileUpload1.PostedFile.InputStream) { fr.Read(fileContents, 0, FileUpload1.PostedFile.ContentLength); } using (Stream writer = ftpRequest.GetRequestStream()) { writer.Write(fileContents, 0, fileContents.Length); } ftpResponse = (FtpWebResponse)ftpRequest.GetResponse(); Response.Write(ftpResponse.StatusDescription); }
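
    On the hunch that a connection is not being disposed of properly: the FtpWebResponse in the code above is never closed, which can leave the FTP control connection lingering between requests. A small adjustment worth trying (a guess, not a confirmed fix):

        // Dispose the response so the underlying FTP connection is released promptly.
        using (FtpWebResponse ftpResponse = (FtpWebResponse)ftpRequest.GetResponse())
        {
            Response.Write(ftpResponse.StatusDescription);
        }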

    Read the article

  • Can I use the auto-generated Linq-to-SQL entity classes in 'disconnected' mode?

    - by Gary McGill
    Suppose I have an automatically-generated Employee class based on the Employees table in my database. Now suppose that I want to pass employee data to a ShowAges method that will print out name & age for a list of employees. I'll retrieve the data for a given set of employees via a linq query, which will return me a set of Employee instances. I can then pass the Employee instances to the ShowAges method, which can access the Name & Age fields to get the data it needs. However, because my Employees table has relationships with various other tables in my database, my Employee class also has a Department field, a Manager field, etc. that provide access to related records in those other tables. If the ShowAges method were to invoke any of those methods, this would cause lots more data to be fetched from the database, on-demand. I want to be sure that the ShowAges method only uses the data I have already fetched for it, but I really don't want to have to go to the trouble of defining a new class which replicates the Employee class but has fewer methods. (In my real-world scenario, the class would have to be considerably more complex than the Employee class described here; it would have several 'joined' classes that do need to be populated, and others that don't). Is there a way to 'switch off' or 'disconnect' the Employees instances so that an attempt to access any property or related object that's not already populated will raise an exception? If not, then I assume that since this must be a common requirement, there might be an already-established pattern for doing this sort of thing?
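
    LINQ to SQL has a switch that gets close to this: turning off deferred loading on the DataContext so unloaded associations simply stay unloaded (they come back null or empty rather than raising an exception), combined with DataLoadOptions for the relations that should be populated. A sketch, with MyDataContext standing in for the generated context class:

        // Requires System.Data.Linq (DataLoadOptions) and System.Linq (ToList).
        using (var db = new MyDataContext())
        {
            db.DeferredLoadingEnabled = false;              // no on-demand fetching of Department, Manager, ...
            var options = new System.Data.Linq.DataLoadOptions();
            options.LoadWith<Employee>(e => e.Department);  // eagerly load only what ShowAges-style code needs
            db.LoadOptions = options;

            var employees = db.Employees.ToList();          // safe to hand off afterwards
        }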

    Read the article

  • passive view and display logic

    - by genesys
    Hi! In MVC and MVP and similar patterns there's often the approach of the "passive view" which is as stupid as possible. This should facilitate unit testing and create a clearer separation of view and model. I know that those patterns come in very different flavours and especially the understanding of MVP seems to differ from article to article. Therefore my question is not "how do i implement this pattern correctly". I want to improve view and model separation and go for better testability of the application. Therefore i'd like to go for a passive view. But my question is, where would you put logic that is clearly only view related? like a textviewer should scroll the text when the scrollbar is moved. would you put the logic for this into the Presenter? Let's say the textviewer has some extended functionality. like setting markings on textpassages. The logic for this makes clearly sense to be put into the Presenter. However, if it is mixed with all the 'direct' logic of the view (like scrolling the text) the Presenter could become very big, which is also not really a nice design. So my question is where to put display related logic of a passive view and what functionallity to mix in the Presenter. Thanks!

    Read the article

  • regex php: find everything in div

    - by Fifth-Edition
    I'm trying to find everything inside a div using a regexp. I'm aware that there is probably a smarter way to do this, but I've chosen regexp. Currently my regexp pattern looks like this: $gallery_pattern = '/([\s\S]*)<\/div/'; And it does the trick - somewhat. The problem is if I have two divs after each other, like this: <div class="gallery">text to extract here</div> <div class="gallery">text to extract from here as well</div> I want to extract the information from both divs, but my problem, when testing, is that I'm not getting the two texts as separate results but instead: "text to extract here text to extract from here as well" So, to sum up: it skips the first end of the div and continues on to the next. The text inside the div can contain "<", "/" and line breaks, just so you know! Does anyone have a simple solution to this problem? I'm still a regexp novice.
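
    The behaviour described is classic greedy matching: [\s\S]* runs to the last </div> it can reach. Making the quantifier lazy (and anchoring on the opening tag) keeps each match inside its own div; a small sketch:

        <?php
        $html = '<div class="gallery">text to extract here</div>
        <div class="gallery">text to extract from here as well</div>';

        // The *? makes the capture stop at the first closing </div> instead of the last one.
        $gallery_pattern = '/<div class="gallery">([\s\S]*?)<\/div>/';
        preg_match_all($gallery_pattern, $html, $matches);

        print_r($matches[1]); // prints both captured texts as separate entries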

    Read the article

  • Why won't VS2010 RC use my existing types when I add a service reference?

    - by Johan Driessen
    I have a huge problem getting services references in VS2010 RC to use existing assemblies. Even though I have a class library with all the data contracts (classes marked with DataContract and properties with DataMember) that is shared between the service project and the consuming project (which is a class library), when I add a service reference, the data contracts are regenerated withing the service reference instead of using the existing types. When I was using VS2010 beta 2, this worked fine, and I have existing service references using the very same data contracts. But if I add a new service reference, or even update an old one, it won't use the existing types anymore. I have made a mini-test-solution, with one service, one data contract type and one console app as a consumer (all in the same solution), and there it seems to work, but that's no great comfort to me. Is there any way to see why it can't use the existing types? Edit to clearify. It works to generate the proxy classes with svcutil.exe, and point to the data contracts dll, like this: svcutil.exe http://localhost/MyService.svc /reference:[Path To DataContracts]\DataContracts.dll /n:*,MyProject.MyServiceReference /ct:System.Collections.Generic.List`1 The question is, what possible reason could there be for Visual Studio to generate its own datacontracts instead of using the existing ones even though the "reuse" checkbox is checked and the datacontracts assembly is referenced.

    Read the article

  • RIA Services for transmitting non DB object-graph

    - by Mike Gates
    I have been getting into RIA services because I thought it would simplify dealing with the services layer of web applications I wish to build. I see lots of examples out there showing how to create DomainService classes which expose and consume entities that have some kind of relational database backing, and therefore have foreign-key relationships. However, I would like to know how to expose and consume normal object graphs...objects that contain references to eachother but don't have foreign keys. For example, say I want a service operation called "GetFolderInformation(string pathToFolder)". I want this to return a custom object called "FolderInformation" structured with: - string Name - IEnumerable<FileInformation> Files I cannot get this to work because it seems that RIA wants to deal with entities that have foreign key relationships. Why? Why can't the serializer just see my object references and recreate that in the proxy on the other side? Data exists behind service layers that doesn't necessarily have foreign key relationships...like folder/file for example. EDIT: I realized I hadn't asked my question! My question is, is there a way to do what I am trying to do?

    Read the article

  • Using multiple named outlets and a wrapper view with no content in Emberjs

    - by user1889776
    I'm trying to use multiple named outlets with Ember.js. Is my approach below correct? Markup: <script type="text/x-handlebars" data-template-name="application"> <div id="mainArea"> {{outlet main_area}} </div> </script> <script type="text/x-handlebars" data-template-name="home"> <ul id="sections"> {{outlet sections}} </ul> <ul id="categories"> {{outlet categories}} </ul> </script> <script type="text/x-handlebars" data-template-name="sections"> {{#each section in controller}} <li><img {{bindAttr src="section.image"}}></li> {{/each}} </script> <script type="text/x-handlebars" data-template-name="categories"> {{#each category in controller}} <img {{bindAttr src="category.image"}}> {{/each}} </script>? JS Code: Here I set the content of the various controllers to data grabbed from a server and connect outlets with their corresponding views. Since the HomeController has no content, set its content to an empty object - a hack to get the rid of this error message: Uncaught Error: assertion failed: Cannot delegate set('categories' ) to the 'content' property of object proxy : its 'content' is undefined. App.Router = Ember.Router.extend({ enableLogging: false, root: Ember.Route.extend({ index: Ember.Route.extend({ route: '/', connectOutlets: function(router){ router.get('sectionsController').set('content',App.Section.find()); router.get('categoriesController').set('content', App.Category.find()); router.get('applicationController').connectOutlet('main_area', 'home'); router.get('homeController').connectOutlet('home', {}); router.get('homeController').connectOutlet('categories', 'categories'); router.get('homeController').connectOutlet('sections', 'sections'); } }) }) });

    Read the article

  • SQL Concurrent test update question

    - by ptoinson
    Howdy folks, I have a SQL Server 2008 database in which I have a table for Tags. A tag is just an id and a name. The definition of the tags table looks like: CREATE TABLE [dbo].[Tag]( [ID] [int] IDENTITY(1,1) NOT NULL, [Name] [varchar](255) NOT NULL CONSTRAINT [PK_Tag] PRIMARY KEY CLUSTERED ( [ID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ) Name also has a unique index. Further, I have several processes adding data to this table at a pretty rapid rate. These processes use a stored proc that looks like: ALTER PROC [dbo].[lg_Tag_Insert] @Name varchar(255) AS DECLARE @ID int SET @ID = (select ID from Tag where Name=@Name ) if @ID is null begin INSERT Tag(Name) VALUES (@Name) RETURN SCOPE_IDENTITY() end else begin return @ID end My issue is that, other than being a novice at concurrent database design, there seems to be a race condition that occasionally causes an error saying I'm trying to enter duplicate keys (Name) into the DB. The error is: Cannot insert duplicate key row in object 'dbo.Tag' with unique index 'IX_Tag_Name'. This makes sense, I'm just not sure how to fix it. If it were code I would know how to lock the right areas; SQL Server is quite a different beast. The first question is: what is the proper way to code this 'check, then update' pattern? It seems I need to get an exclusive lock on the row during the check, rather than a shared lock, but it's not clear to me what the best way to do that is. Any help in the right direction will be greatly appreciated. Thanks in advance.
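
    The usual fix for this check-then-insert race is to take an update/range lock during the lookup inside a transaction, so two concurrent callers cannot both conclude the name is missing. A sketch along those lines (same shape as the proc above, untested against the actual schema):

        ALTER PROC [dbo].[lg_Tag_Insert]
            @Name varchar(255)
        AS
        BEGIN
            DECLARE @ID int;
            BEGIN TRAN;
                -- UPDLOCK/HOLDLOCK serialize concurrent lookups for the same name
                SELECT @ID = ID FROM Tag WITH (UPDLOCK, HOLDLOCK) WHERE Name = @Name;
                IF @ID IS NULL
                BEGIN
                    INSERT Tag(Name) VALUES (@Name);
                    SET @ID = SCOPE_IDENTITY();
                END
            COMMIT TRAN;
            RETURN @ID;
        END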

    Read the article

  • How can I exclude pages created from a specific template from the CQ5 dispatcher cache?

    - by Shawn
    I have a specific Adobe CQ5 (5.5) content template that authors will use to create pages. I want to exclude any page that is created from this template from the dispatcher cache. As I understand it currently, the only way I know to prevent caching is to configure dispatcher.any to not cache a particular URL. But in this case, the URL isn't known until a web author uses the template to create a page. I don't want to have to go back and modify dispatcher.any every time a page is created--or at least I want to automate this if there is no other way. I am using IIS for the dispatcher. The reason I don't want to cache the pages is because the underlying JSPs that render the content for these pages produce dynamic content, and the pages don't use querystrings and won't carry authentication headers. The pages will be created in unpredictable directories, so I don't know the URL pattern ahead of time. How can I configure things so that any page that is created from a certain template will be automatically excluded from the dispatcher cache? It seems like CQ ought to have some mechanism to respect HTTP response/caching headers. If the HTTP response headers specify that the response shouldn't be cached, it seems like the dispatcher shouldn't cache it--regardless of what dispatcher.any says. This is the CQ5 documentation I have been referencing.
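
    One lever worth trying here: the dispatcher skips caching for responses that carry a Dispatcher: no-cache header, so the page component behind the template can opt every page built from it out of the cache without editing dispatcher.any. A minimal sketch for the component JSP (worth verifying against the dispatcher documentation for your version):

        <%-- Every page rendered through this template tells the dispatcher not to cache it. --%>
        <% response.setHeader("Dispatcher", "no-cache"); %>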

    Read the article

  • Altering lazy-loaded object's private variables

    - by Kevin Pang
    I'm running into an issue with private setters when using NHibernate and lazy-loading. Let's say I have a class that looks like this: public class User { public int Foo {get; private set;} public IList<User> Friends {get; set;} public void SetFirstFriendsFoo() { // This line works in a unit test but does nothing during a live run with // a lazy-loaded Friends list Users(0).Foo = 1; } } The SetFirstFriendsFoo call works perfectly inside a unit test (as it should since objects of the same type can access each others private properties). However, when running live with a lazy-loaded Friends list, the SetFirstFriendsFoo call silently fails. I'm guessing the reason for this is because at run-time, the Users(0).Foo object is no longer of type User, but of a proxy class that inherits from User since the Friends list was lazy-loaded. My question is this: shouldn't this generate a run-time exception? You get compile-time exceptions if you try to access another class's private properties, but when you run into a situation like this is looks like the app just ignores you and continues along its way.

    Read the article

  • SHGetFolderPath returns path with question marks in it

    - by Colen
    Hi, Our application calls ShGetFolderPath when it runs, to get the My Documents folder. This normally works great. However, for three users - ???????, Jörg and Jörgen (see if you can spot the pattern!) - the call returns some very strange results. For example, for ???????, the call returns: c:\Users\???????\Documents I assume there's some sort of character encoding shenanigan going on here, possibly related to Unicode, but I don't have any experience with that sort of thing. How can I get a useful path to the folder (and other related folders) out of windows, without grovelling through registry keys for the information? In an email to me, ??????? ("Dmitry"), told me his "my documents" folder was actually located here: C:\Users\43D6~1\Documents So I know there's a way to get a "normal" version of the path out of Windows, I just don't know what it is. Background: Our application is not unicode-aware, and uses standard "char *" strings. How can we get the "normal" path? I'm not opposed to calling the "unicode" version of the function, then converting it to "normal" text, if that's possible. Converting the application entirely to use unicode is not an option here (we don't have the time). Thanks.
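
    Since Windows already hands back an 8.3 alias for the folder (C:\Users\43D6~1\Documents), one workable route for a non-Unicode app is to fetch the path with the wide API, convert it to its short form, and then narrow it. A sketch, assuming 8.3 name generation has not been disabled on the volume:

        #include <windows.h>
        #include <shlobj.h>

        bool GetMyDocumentsAnsi(char *out, int outLen)
        {
            wchar_t widePath[MAX_PATH];
            if (FAILED(SHGetFolderPathW(NULL, CSIDL_PERSONAL, NULL, SHGFP_TYPE_CURRENT, widePath)))
                return false;

            wchar_t shortPath[MAX_PATH];
            if (GetShortPathNameW(widePath, shortPath, MAX_PATH) == 0)   // e.g. C:\Users\43D6~1\Documents
                return false;

            // The 8.3 form avoids the characters that were turning into question marks,
            // so the narrowing conversion is safe for a char*-based application.
            return WideCharToMultiByte(CP_ACP, 0, shortPath, -1, out, outLen, NULL, NULL) != 0;
        }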

    Read the article

  • standard way to perform a clean shutdown with Boost.Asio

    - by Timothy003
    I'm writing a cross-platform server program in C++ using Boost.Asio. Following the HTTP Server example on this page, I'd like to handle a user termination request without using implementation-specific APIs. I've initially attempted to use the standard C signal library, but have been unable to find a design pattern suitable for Asio. The Windows example's design seems to resemble the signal library closest, but there's a race condition where the console ctrl handler could be called after the server object has been destroyed. I'm trying to avoid race conditions that cause undefined behavior as specified by the C++ standard. Is there a standard (correct) way to stop the server? So far: #include <csignal> #include <functional> #include <boost/asio.hpp> using std::signal; using boost::asio::io_service; extern "C" { static void handle_signal(int); } namespace { std::function<void ()> sighandler; } void handle_signal(int) { sighandler(); } int main() { io_service s; sighandler = std::bind(&io_service::stop, &s); auto res = signal(SIGINT, &handle_signal); // race condition? SIGINT raised before I could set ignore back if (res == SIG_IGN) signal(SIGINT, SIG_IGN); res = signal(SIGTERM, &handle_signal); // race condition? SIGTERM raised before I could set ignore back if (res == SIG_IGN) signal(SIGTERM, SIG_IGN); s.run(); // reset signals signal(SIGTERM, SIG_DFL); signal(SIGINT, SIG_DFL); // is it defined whether handle_signal can still be in execution at this // point? sighandler = nullptr; }
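
    If the Boost version in use is 1.47 or newer (an assumption), Asio ships its own signal handling, which sidesteps the raw std::signal races discussed above because the completion handler runs inside run():

        #include <csignal>
        #include <boost/asio.hpp>

        int main()
        {
            boost::asio::io_service s;

            boost::asio::signal_set signals(s, SIGINT, SIGTERM);
            signals.async_wait([&s](const boost::system::error_code &, int /*signo*/)
            {
                s.stop();   // invoked from within run(), so no race with the io_service lifetime
            });

            s.run();
        }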

    Read the article
