Search Results

Search found 4893 results on 196 pages for 'expect'.

Page 158/196 | < Previous Page | 154 155 156 157 158 159 160 161 162 163 164 165  | Next Page >

  • Is there a recommended way to communicate scientific/engineering programming to C developers?

    - by ggkmath
    Hi, I have a lot of MATLAB code that needs to get ported to C (execution speed is critical for this work) as part of a back-end process for a web application. When I attempt to outsource this code to a C developer, I assume (correct me if I'm wrong) few C developers also understand MATLAB code (things like indexing and memory management are different, etc.). I wonder if there are any C developers out there who can recommend a procedure for me to follow to best communicate what the code does. For example, should I provide the MATLAB code and explain what it's doing line by line? Or, should I just provide the math/algorithm, explain it in plain English, and let the C developer implement it with this understanding in his/her own way (e.g. can I assume the developer understands how to work with complex math (i.e. imaginary numbers), how to generate histograms, perform an FFT, etc.)? Or, is there a better method? I expect I'm not the first to need to do this, so I wonder if any C developers out there have run into this situation and can share any conventional wisdom on how they'd like this task to be handed over? Thanks in advance for any comments.

    Read the article

  • jQuery $.ajax calls success handler when request fails because of browser reloading

    - by Martin
    I have the following code: $.ajax({ type: "POST", url: url, data: sendable, dataType: "json", success: function(data) { if(customprocessfunc) customprocessfunc(data); }, error: function(XMLHttpRequest, textStatus, errorThrown){ // error handler here } }); I have a timer which makes AJAX requests often. If I do not receive anything in 'data', I show an error message to the user - it means something went wrong on the server. The problem is when the user reloads the page while the AJAX call is in progress. I can see in Firebug that the AJAX call fails (the URL is colored red and no HTTP status is displayed), so I expect that jQuery will stop the request or at least go to the error handler. But it goes to the success handler and passes null in the 'data' variable. As a result, when the user reloads the page, they sometimes see my big red message about an unknown error (because data is null). Is there any way to make jQuery abort the request when the page is reloaded, or at least not call my success function? I have no way to know in the success handler why the data is null - did it come back empty from the server, or was the call aborted because of the reload?
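
    One common workaround (a sketch only, not a definitive fix): an aborted request, such as one cut off by a page reload, can still land in the success handler with a null payload, so guard the success handler and check the XHR status in the error handler; a status of 0 usually means an abort or dropped connection rather than a real server failure. The names url, sendable and customprocessfunc are taken from the question.

        // Sketch: treat a null payload / status 0 as "aborted", not as a server error.
        $.ajax({
            type: "POST",
            url: url,
            data: sendable,
            dataType: "json",
            success: function(data, textStatus, xhr) {
                if (data === null || data === undefined) {
                    return; // likely an aborted request (page reload); skip the error banner
                }
                if (customprocessfunc) customprocessfunc(data);
            },
            error: function(xhr, textStatus, errorThrown) {
                if (xhr.status === 0) {
                    return; // aborted or connection dropped; nothing to report to the user
                }
                // genuine server-side failure: show the error message here
            }
        });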

    Read the article

  • ActiveModel::MassAssignmentSecurity::Error in CustomersController#create (attr_accessible is set)

    - by megabga
    In my controller I get an error in the create action when it tries to create the model (can't mass-assign protected attributes), but in my spec the mass-assignment test for the model passes!?! My model: class Customer < ActiveRecord::Base attr_accessible :doc, :doc_rg, :name, :birthday, :name_sec, :address, :state_id, :city_id, :district_id, :customer_pj, :is_customer, :segment_id, :activity_id, :person_type, :person_id belongs_to :person , :polymorphic => true, dependent: :destroy has_many :histories has_many :emails def self.search(search) if search conditions = [] conditions << ['name LIKE ?', "%#{search}%"] find(:all, :conditions => conditions) else find(:all) end end end I've also tried setting attr_accessible in the controller, somewhat at random. The controller: class CustomersController < ApplicationController include ActiveModel::MassAssignmentSecurity attr_accessible :doc, :doc_rg, :name, :birthday, :name_sec, :address, :state_id, :city_id, :district_id, :customer_pj, :is_customer autocomplete :business_segment, :name, :full => true autocomplete :business_activity, :name, :full => true [...] end The test that passes: describe "accessible attributes" do it "should allow access to basics fields" do expect do @customer.save end.should_not raise_error(ActiveModel::MassAssignmentSecurity::Error) end end The error: ActiveModel::MassAssignmentSecurity::Error in CustomersController#create Can't mass-assign protected attributes: doc, doc_rg, name_sec, address, state_id, city_id, district_id, customer_pj, is_customer https://github.com/megabga/crm 1.9.2p320 Rails 3.2 MacOS pg

    Read the article

  • g-wan - reproducing the performance claims

    - by user2603628
    Using gwan_linux64-bit.tar.bz2 under Ubuntu 12.04 LTS unpacking and running gwan then pointing wrk at it (using a null file null.html) wrk --timeout 10 -t 2 -c 100 -d20s http://127.0.0.1:8080/null.html Running 20s test @ http://127.0.0.1:8080/null.html 2 threads and 100 connections Thread Stats Avg Stdev Max +/- Stdev Latency 11.65s 5.10s 13.89s 83.91% Req/Sec 3.33k 3.65k 12.33k 75.19% 125067 requests in 20.01s, 32.08MB read Socket errors: connect 0, read 37, write 0, timeout 49 Requests/sec: 6251.46 Transfer/sec: 1.60MB .. very poor performance, in fact there seems to be some kind of huge latency issue. During the test gwan is 200% busy and wrk is 67% busy. Pointing at nginx, wrk is 200% busy and nginx is 45% busy: wrk --timeout 10 -t 2 -c 100 -d20s http://127.0.0.1/null.html Thread Stats Avg Stdev Max +/- Stdev Latency 371.81us 134.05us 24.04ms 91.26% Req/Sec 72.75k 7.38k 109.22k 68.21% 2740883 requests in 20.00s, 540.95MB read Requests/sec: 137046.70 Transfer/sec: 27.05MB Pointing weighttpd at nginx gives even faster results: /usr/local/bin/weighttp -k -n 2000000 -c 500 -t 3 http://127.0.0.1/null.html weighttp - a lightweight and simple webserver benchmarking tool starting benchmark... spawning thread #1: 167 concurrent requests, 666667 total requests spawning thread #2: 167 concurrent requests, 666667 total requests spawning thread #3: 166 concurrent requests, 666666 total requests progress: 9% done progress: 19% done progress: 29% done progress: 39% done progress: 49% done progress: 59% done progress: 69% done progress: 79% done progress: 89% done progress: 99% done finished in 7 sec, 13 millisec and 293 microsec, 285172 req/s, 57633 kbyte/s requests: 2000000 total, 2000000 started, 2000000 done, 2000000 succeeded, 0 failed, 0 errored status codes: 2000000 2xx, 0 3xx, 0 4xx, 0 5xx traffic: 413901205 bytes total, 413901205 bytes http, 0 bytes data The server is a virtual 8 core dedicated server (bare metal), under KVM Where do I start looking to identify the problem gwan is having on this platform ? I have tested lighttpd, nginx and node.js on this same OS, and the results are all as one would expect. The server has been tuned in the usual way with expanded ephemeral ports, increased ulimits, adjusted time wait recycling etc.

    Read the article

  • UIAlertView will not show

    - by John
    I have a program that is requesting a JSON string. I have created a class that contains the connect method below. When the root view is coming up, it does a request to this class and method to load up some data for the root view. When I test the error code (by changing the URL host to 127.0.0.1), I expect the Alert to show. Behavior is that the root view just goes black, and the app aborts with no alert. No errors in debug mode on the console, either. Any thoughts as to this behavior? I've been looking around for hints to this for hours to no avail. Thanks in advance for your help. Note: the conditional for (error) is called, as well as the UIAlertView code. - (NSString *)connect:(NSString *)urlString { NSString *jsonString; UIApplication *app = [UIApplication sharedApplication]; app.networkActivityIndicatorVisible = YES; NSError *error = nil; NSURLResponse *response = nil; NSURL *url = [[NSURL alloc] initWithString:urlString]; NSURLRequest *req = [NSURLRequest requestWithURL:url cachePolicy:NSURLRequestReloadIgnoringCacheData timeoutInterval:10]; NSData *_response = [NSURLConnection sendSynchronousRequest: req returningResponse: &response error: &error]; if (error) { /* inform the user that the connection failed */ //AlertWithMessage(@"Connection Failed!", message); UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Oopsie!" message:@"Unable to connect! Try later, thanks." delegate:nil cancelButtonTitle:@"OK" otherButtonTitles: nil]; [alert show]; [alert release]; } else { jsonString = [[[NSString alloc] initWithData:_response encoding:NSUTF8StringEncoding] autorelease]; } app.networkActivityIndicatorVisible = NO; [url release]; return jsonString; }
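
    One thing worth checking (an assumption, not a confirmed diagnosis): jsonString is a local pointer that is never initialized, so on the error path the method returns a garbage pointer that can crash the app before the alert is ever drawn, and [alert show], like all UIKit calls, has to run on the main thread. A hedged Objective-C sketch of both guards inside connect:, replacing the corresponding lines:

        // Sketch: initialize the return value and show the alert on the main thread.
        // Assumes connect: may be called from a background thread; not confirmed by the question.
        NSString *jsonString = nil;   // was uninitialized, so the error path returned garbage
        // ... sendSynchronousRequest: as before ...
        if (error) {
            dispatch_async(dispatch_get_main_queue(), ^{
                UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Oopsie!"
                                                                message:@"Unable to connect! Try later, thanks."
                                                               delegate:nil
                                                      cancelButtonTitle:@"OK"
                                                      otherButtonTitles:nil];
                [alert show];
                [alert release];
            });
        }
        // Returning nil (instead of an uninitialized pointer) lets the caller handle the failure.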

    Read the article

  • Ruby on Rails controller and architecture with cells

    - by dt
    I decided to try to use the cells plugin from rails: http://cells.rubyforge.org/community.html given that I'm new to Ruby and very used to thinking in terms of components. Since I'm developing the app piecemeal and then putting it together piece by piece, it makes sense to think in terms of components. So, I've been able to get cells working properly inside a single view, which calls a partial. Now, what I would like to be able to do (however, maybe my instincts need to be redirected to be more "Rails-y"), is call a single cell controller and use the parameters to render one output vs. another. Basically, if there were a controller like: def index params[:responsetype] end def processListResponse end def processSearchResponse end And I have two different controller methods that I want to respond to based on the params response type, where I have a single template on the front end and want the inner "component" to render differently depending on what type of request is made. That allows me to reuse the same front-end code. I suppose I could do this with an ajax call instead and just have it rerender the component on the front end, but it would be nice to have the option to do it either way and to understand how to architect Rails a bit better in the process. It seems like there should be a "render" option from within the cells framework to render to a certain controller or view, but it's not working like I expect and I don't know if I'm even in the ballpark. Thanks!

    Read the article

  • Do new Apple SDKs patch previous releases?

    - by Francisco Garcia
    A new iPhone will soon be out, along with a new iOS release. Sooner or later there will also be an Xcode upgrade with the SDK for iOS 6. Does Apple do any kind of bugfixing on previous SDKs, or are bugs only fixed in new releases? As an example: Core Data with iCloud still has some issues but it is getting better over time. Let's say I have an app that really depends on that combo. I would require iOS 6; however, not all users upgrade their handsets. Ideally an app compiled with a newer Xcode release could patch errors in previous SDKs if the target is set to an older iOS release. Should I expect a project compiled with future SDK releases to work better on devices running older iOS versions? Will some SDK bugfixes be backported? I understand that there are some bugs that cannot be fixed without an iOS update on the client. Also that it is a lot of work (and unlikely) to backport bugfixes. I am just wondering what Apple's normal release policy is.

    Read the article

  • Submit a form and get a JSON response with jQuery

    - by Leopd
    I expect this is easy, but I'm not finding a simple explanation anywhere of how to do this. I have a standard HTML form like this: <form name="new_post" action="process_form.json" method=POST> <label>Title:</label> <input id="post_title" name="post.title" type="text" /><br/> <label>Name:</label><br/> <input id="post_name" name="post.name" type="text" /><br/> <label>Content:</label><br/> <textarea cols="40" id="post_content" name="post.content" rows="20"></textarea> <input id="new_post_submit" type="submit" value="Create" /> </form> I'd like to have javascript (using jQuery) submit the form to the form's action (process_form.json), and receive a JSON response from the server. Then I'll have a javascript function that runs in response to the JSON response, like function form_success(json) { alert('Your form submission worked'); // process json response } How do I wire up the form submit button to call my form_success method when done? Also it should override the browser's own navigation, since I don't want to leave the page. Or should I move the button out of the form to do that?
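
    One common approach (a sketch, not the only way): intercept the form's submit event, cancel the browser's default navigation, and post the serialized fields yourself with dataType 'json', so jQuery parses the response and hands it to form_success. This assumes jQuery is loaded; the selector just matches the form by its name attribute from the question.

        // Sketch: submit the form over AJAX and route the parsed JSON to form_success.
        $(document).ready(function() {
            $('form[name="new_post"]').submit(function(event) {
                event.preventDefault();          // stay on the current page
                $.ajax({
                    type: 'POST',
                    url: this.action,            // process_form.json
                    data: $(this).serialize(),   // post.title, post.name, post.content
                    dataType: 'json',
                    success: form_success        // receives the parsed JSON object
                });
            });
        });

    The submit button can stay inside the form; event.preventDefault() is what keeps the browser from navigating away.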

    Read the article

  • Is using advanced constructs (function, new, function calls) in JSON safe?

    - by Vilx-
    JSON is a nice way to pass complex data from my server-side code to client-side JavaScript. For example, in PHP I can write: <script type="text/javascript"> var MyComplexVariable = <?= BigFancyObjectGraph.GetJSON() ?>; DoMagic(MyComplexVariable); </script> This is pretty cool, but sometimes you want to pass more than basic data, like dates or even function definitions. There is a simple and straightforward way of doing it too, like: <script type="text/javascript"> var MyComplexVariable = { 'SimpleProperty' : 42, 'FunctionProperty' : function() { return 6*7; }, 'DateProperty' : new Date(989539200000), 'ArbitraryProperty' : GetTheMeaningOfLifeUniverseAndEverything() }; DoMagic(MyComplexVariable); </script> And this works like a charm on all browsers I've seen so far. But according to JSON.org such syntax is invalid. On the other hand, I've seen this syntax being used in very many places, including some popular JavaScript frameworks. So... Can I expect any problems if I use "unsupported" JSON features like the above? Why is it wrong, or isn't it?
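
    For what it's worth, the distinction that seems to matter here (a sketch of the reasoning, not an authoritative ruling): the PHP example emits a JavaScript object literal that the script engine evaluates as code, whereas JSON proper, per the json.org grammar, only allows objects, arrays, strings, numbers, booleans and null. The same text that works as a literal is rejected by a strict JSON parser, assuming a browser with the native JSON object:

        // As an inline object literal, the engine happily evaluates functions and constructors:
        var literal = { answer: function() { return 6 * 7; }, when: new Date(989539200000) };
        console.log(literal.answer());                       // 42

        // As JSON it is simply invalid; a conforming parser throws:
        try {
            JSON.parse('{ "answer": function() { return 6*7; } }');
        } catch (e) {
            console.log('not JSON: ' + e);                   // SyntaxError
        }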

    Read the article

  • Spring Data Neo4J @Indexed(unique = true) not working

    - by Markus Lamm
    I'm new to Neo4J and I have what is probably an easy question. There are @NodeEntity classes in my application, and a property (name) is annotated with @Indexed(unique = true) to achieve uniqueness, like I do in JPA with @Column(unique = true). My problem is that when I persist an entity with a name that already exists in my graph, it works fine anyway. But I expected some kind of exception here...?! Here's an overview of my basic code: @NodeEntity public abstract class BaseEntity implements Identifiable { @GraphId private Long entityId; ... } public class Role extends BaseEntity { @Indexed(unique = true) private String name; ... } public interface RoleRepository extends GraphRepository<Role> { Role findByName(String name); } @Service public class RoleServiceImpl extends BaseEntityServiceImpl<Role> implements { private RoleRepository repository; @Override @Transactional public T save(final T entity) { return getRepository().save(entity); } } And this is my test: @Test public void testNameUniqueIndex() { final List<Role> roles = Lists.newLinkedList(service.findAll()); final String existingName = roles.get(0).getName(); Role newRole = new Role.Builder(existingName).build(); newRole = service.save(newRole); } That's the point where I expect something to go wrong! How can I ensure the uniqueness of a property without checking it myself?? THANKS IN ADVANCE FOR ANY IDEAS!! P.S.: I'm using neo4j 1.8.M07, spring-data-neo4j 2.1.0.BUILD-SNAPSHOT and Spring 3.1.2.RELEASE.

    Read the article

  • iPhone viewWillAppear not firing

    - by chzk
    I've read numerous posts about people having problems with viewWillAppear when you do not create your view hierarchy JUST right. My problem is I can't figure out what that means. If I create a RootViewController and call addSubView on that controller, I would expect the added view(s) to be wired up for viewWillAppear events. Does anyone have an example of a complex programmatic view hierarchy that successfully receives viewWillAppear events at every level? Apple Docs state: Warning: If the view belonging to a view controller is added to a view hierarchy directly, the view controller will not receive this message. If you insert or add a view to the view hierarchy, and it has a view controller, you should send the associated view controller this message directly. Failing to send the view controller this message will prevent any associated animation from being displayed. The problem is that they don't describe how to do this. What the hell does "directly" mean? How do you "indirectly" add a view? I am fairly new to Cocoa and iPhone so it would be nice if there were useful examples from Apple besides the basic Hello World crap. Any help is greatly appreciated...
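
    For what it's worth, a hedged sketch of the two usual ways to get those callbacks forwarded (the names child and animated below are placeholders, not from the question): on iOS 5 and later the view controller containment API does the forwarding for you; before that, the Apple warning quoted above means you have to send viewWillAppear:/viewDidAppear: to the child controller yourself whenever you add its view.

        // iOS 5+ containment: register the child so appearance callbacks are forwarded.
        [self addChildViewController:child];
        [self.view addSubview:child.view];
        [child didMoveToParentViewController:self];

        // Pre-iOS 5: forward the appearance messages manually around addSubview.
        [child viewWillAppear:animated];
        [self.view addSubview:child.view];
        [child viewDidAppear:animated];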

    Read the article

  • How can I introduce a regex action to match the first element in a Catalyst URI?

    - by RET
    Background: I'm using a CRUD framework in Catalyst that auto-generates forms and lists for all tables in a given database. For example: /admin/list/person or /admin/add/person or /admin/edit/person/3 all dynamically generate pages or forms as appropriate for the table 'person'. (In other words, Admin.pm has actions edit, list, add, delete and so on that expect a table argument and possibly a row-identifying argument.) Question: In the particular application I'm building, the database will be used by multiple customers, so I want to introduce a URI scheme where the first element is the customer's identifier, followed by the administrative action/table etc: /cust1/admin/list/person /cust2/admin/add/person /cust2/admin/edit/person/3 This is for "branding" purposes, and also to ensure that bookmarks or URLs passed from one user to another do the expected thing. But I'm having a lot of trouble getting this to work. I would prefer not to have to modify the subs in the existing framework, so I've been trying variations on the following: sub customer : Regex('^(\w+)/(admin)$') { my ($self, $c, @args) = @_; #validation of captured arg snipped.. my $path = join('/', 'admin', @args); $c->request->path($path); $c->dispatcher->prepare_action($c); $c->forward($c->action, $c->req->args); } But it just will not behave. I've used regex matching actions many times, but putting one in the very first 'barrel' of a URI seems unusually traumatic. Any suggestions gratefully received.

    Read the article

  • Unexpected results when looking at ASCII codes in C++.

    - by Columbo
    Hello, the bit of code below is extracting ASCII codes from characters. When I convert £ and € from the extended region I get a load of 1s padding the int that I'm storing the character in. E.g. the output of the code below is: 45 (ASCII 'E' as expected) FFFFFF80 (extended ASCII € as expected, but padded with ones) It's not causing me an issue but I'm just wondering why this happens. Here's the code... unsigned int asciichar[3]; string cTextToEncode = "E€"; for (unsigned int i = 0; i < cTextToEncode.length(); i++) { asciichar[i] = (unsigned int)cTextToEncode[i]; cout << hex << asciichar[i] << "\n"; } Can anyone explain why this is? Thanks
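
    The padding is almost certainly sign extension: on platforms where plain char is signed, bytes above 0x7F are negative, and widening them to unsigned int fills the upper bits with ones. Casting through unsigned char first avoids it. A minimal sketch (using an explicit 0x80 byte to stand in for the euro character, since its exact encoding depends on the source charset):

        // Sketch: widen through unsigned char so high-bit characters are not sign-extended.
        #include <iostream>
        #include <string>
        using namespace std;

        int main() {
            string cTextToEncode = "E\x80";                       // 0x80 stands in for the euro byte
            for (unsigned int i = 0; i < cTextToEncode.length(); i++) {
                unsigned int code = static_cast<unsigned char>(cTextToEncode[i]);
                cout << hex << code << "\n";                      // prints 45 then 80, no FFFFFF prefix
            }
            return 0;
        }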

    Read the article

  • Reduce durability in MySQL for performance

    - by Paul Prescod
    My site occasionally has fairly predictable bursts of traffic that increase the throughput by 100 times more than normal. For example, we are going to be featured on a television show, and I expect in the hour after the show, I'll get more than 100 times more traffic than normal. My understanding is that MySQL (InnoDB) generally keeps my data in a bunch of different places: RAM Buffers commitlog binary log actual tables All of the above places on my DB slave This is too much "durability" given that I'm on an EC2 node and most of the stuff goes across the same network pipe (file systems are network attached). Plus the drives are just slow. The data is not high value and I'd rather take a small chance of a few minutes of data loss rather than have a high probability of an outage when the crowd arrives. During these traffic bursts I would like to do all of that I/O only if I can afford it. I'd like to just keep as much in RAM as possible (I have a fair chunk of RAM compared to the data size that would be touched over an hour). If buffers get scarce, or the I/O channel is not too overloaded, then sure, I'd like things to go to the commitlog or binary log to be sent to the slave. If, and only if, the I/O channel is not overloaded, I'd like to write back to the actual tables. In other words, I'd like MySQL/InnoDB to use a "write back" cache algorithm rather than a "write through" cache algorithm. Can I convince it to do that? If this is not possible, I am interested in general MySQL write-performance optimization tips. Most of the docs are about optimizing read performance, but when I get a crowd of users, I am creating accounts for all of them, so that's a write-heavy workload.
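
    For what it's worth, the standard knobs for trading durability for write throughput in InnoDB are innodb_flush_log_at_trx_commit and sync_binlog; relaxing them means roughly the last second of commits can be lost on a crash, which matches the tolerance described above. A sketch of the usual settings (the specific values are a judgment call for this workload, not a recommendation):

        -- Relax redo-log flushing: write the log at commit, fsync about once per second.
        SET GLOBAL innodb_flush_log_at_trx_commit = 2;
        -- Let the operating system decide when to sync the binary log to disk.
        SET GLOBAL sync_binlog = 0;
        -- The same settings can go in my.cnf so they survive a server restart.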

    Read the article

  • Returning a reference to an object is not changing the address in C++

    - by ashish-sangwan
    I am trying to understand functions returning a reference. For that I have written a simple program: #include<iostream> using namespace std; class test { int i; friend test& func(); public: test(int j){i=j;} void show(){cout<<i<<endl;} }; test& func() { test temp(10); return temp; // Address of temp=0xbfcb2874 } int main() { test obj1(50); // Address of obj1=0xbfcb28a0 func()=obj1; <= Problem: The address of obj1 is not changing obj1.show(); // Address of obj1=0xbfcb28a0 return 0; } I ran the program using gdb and observed that the address of obj1 still remains the same, but I expect it to get changed to 0xbfcb2874. I am not clear on the concept. Please help.
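
    A short sketch of what is actually going on (the fix shown, a static local, is just one way to make the assignment well-defined): func() returns a reference to a local that is destroyed when the function returns, so writing through that reference is undefined behavior; and even with a valid reference, func() = obj1 copies obj1's value into the referenced object, it never moves obj1, because a C++ object's address never changes during its lifetime.

        // Sketch: the reference now refers to an object that outlives the call,
        // and the assignment copies a value; no addresses change anywhere.
        #include <iostream>
        using namespace std;

        class test {
            int i;
        public:
            test(int j) : i(j) {}
            void show() { cout << i << endl; }
        };

        test& func() {
            static test persistent(10);   // lives for the whole program, unlike the old local
            return persistent;
        }

        int main() {
            test obj1(50);
            func() = obj1;                // copies obj1's value (50) into 'persistent'
            func().show();                // prints 50
            obj1.show();                  // still prints 50; obj1 itself is untouched
            return 0;
        }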

    Read the article

  • NHibernate Fluent domain object with Id(x => x.id).GeneratedBy.Assigned not saveable

    - by urpcor
    Hi there, I am using domain classes with mappings for a legacy database. The Ids of the entities are calculated by a stored procedure in the DB which gives back the Id for the new row (it's legacy, I can't change this). Now I create the new entity, set the Id and call Save. But nothing happens, no exception. Even NH Profiler does not show a thing. It's as if the Save call does nothing. I expect that NH thinks the record is already in the db because it already has an Id. But I am using Id(x => x.id).GeneratedBy.Assigned() and intentionally the Session.Save(object) method. I am confused. I saw so many samples where it worked. Does anybody have any ideas about it? public class Appendix { public virtual int id { get; set; } public virtual AppendixHierarchy AppendixHierachy { get; set; } public virtual byte[] appendix { get; set; } } public class AppendixMap : ClassMap<Appendix> { public AppendixMap () { WithTable("appendix"); Id(x => x.id).GeneratedBy.Assigned(); References(x => x.AppendixHierachy).ColumnName("appendixHierarchyId"); Map(x => x.appendix); } }
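
    One thing worth ruling out (an assumption, since the question does not show how the session is used): with an assigned identifier, Save() only registers the object with the session, and NHibernate typically sends the INSERT when the session is flushed. Wrapping the call in a transaction, or flushing explicitly, is what normally pushes it to the database. A minimal sketch:

        // Sketch, assuming an open ISession named "session" and an id already
        // obtained from the stored procedure (both assumptions).
        using (var tx = session.BeginTransaction())
        {
            var appendix = new Appendix { id = idFromStoredProcedure };
            session.Save(appendix);
            tx.Commit();        // flushes the session and issues the INSERT
        }
        // Without a transaction, session.Flush() after Save() has the same effect.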

    Read the article

  • Dataset holds a table called "Table", not the table I pass in?

    - by dotnetdev
    Hi, I have the code below: string SQL = "select * from " + TableName; using (DS = new DataSet()) using (SqlDataAdapter adapter = new SqlDataAdapter()) using (SqlConnection sqlconn = new SqlConnection(connectionStringBuilder.ToString())) using (SqlCommand objCommand = new SqlCommand(SQL, sqlconn)) { sqlconn.Open(); adapter.SelectCommand = objCommand; adapter.Fill(DS); } System.Windows.Forms.MessageBox.Show(DS.Tables[0].TableName); return DS; However, every time I run this code, the dataset (DS) is filled with one table called "Table". It does not represent the table name I pass in as the parameter TableName and this parameter does not get mutated so I don't know where the name Table comes from. I'd expect the table to be the same as the tableName parameter I pass in? Any idea why this is not so? EDIT: Important fact: This code needs to return a dataset because I use the dataRelation object in another method, which is dependent on this, and without using a dataset, that method throws an exception. The code for that method is: DataRelation PartToIntersection = new DataRelation("XYZ", this.LoadDataToTable(tableName).Tables[tableName].Columns[0], // Treating the PartStat table as the parent - .N this.LoadDataToTable("PartProducts").Tables["PartProducts"].Columns[0]); // 1 // PartsProducts (intersection) to ProductMaterial DataRelation ProductMaterialToIntersection = new DataRelation("", ds.Tables["ProductMaterial"].Columns[0], ds.Tables["PartsProducts"].Columns[1]); Thanks
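
    For what it's worth, Fill names the created DataTable "Table" by default unless the source-table name is passed in or a table mapping is added. A small sketch against the code above (one fix among several):

        // Sketch: name the filled table explicitly so DS.Tables[TableName] resolves.
        adapter.SelectCommand = objCommand;
        adapter.Fill(DS, TableName);
        // Equivalent alternative:
        // adapter.TableMappings.Add("Table", TableName);
        // adapter.Fill(DS);
        System.Windows.Forms.MessageBox.Show(DS.Tables[TableName].TableName);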

    Read the article

  • Problem shrinking with StretchBlt()

    - by SparkyNZ
    Hi. I have some code that paints my own rectangular buttons based on a source bitmap. Most of the time the destination buttons are bigger than my source bitmap image and StretchBlt works fine. However, when the destination is smaller than the source image, StretchBlt refuses to fill the entire destination area. I know StretchBlt isn't great on quality when it comes to scaling down images but I'm not too concerned about that. I just don't want missing pixels. Here's a link with the source image at the top and destination at the bottom: link text Note: I am actually shrinking parts of the source image into the destination. I am not shrinking the entire image down. So for example, I copy the corners size for size with BitBlt() then I stretch (squash) the horizontal pixel data between the corners from the source image into the destination DC. There is no fault with my source and destination coordinates. If I change from SRCCOPY to WHITENESS, the entire area fills with white as you'd expect. There is no grey bar where pixels haven't copied as you see in the Broken.bmp image above. Has anyone had this problem before, and if so, can somebody please suggest a solution? Cheers
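
    A hedged suggestion (the device-context and coordinate names below are placeholders): dropped rows and columns when shrinking are usually a symptom of the default stretching mode, which deletes pixels rather than blending them. Setting the destination DC to HALFTONE before the shrinking StretchBlt calls, and resetting the brush origin as the GDI documentation requires after changing the mode, generally fills the whole destination rectangle.

        // Sketch: switch to HALFTONE for the shrink, then restore the old mode.
        int oldMode = SetStretchBltMode(hdcDest, HALFTONE);
        SetBrushOrgEx(hdcDest, 0, 0, NULL);     // required after changing the stretch mode

        StretchBlt(hdcDest, dstX, dstY, dstW, dstH,
                   hdcSrc,  srcX, srcY, srcW, srcH, SRCCOPY);

        SetStretchBltMode(hdcDest, oldMode);    // restore the previous mode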

    Read the article

  • Best practices for (over)using Azure queues

    - by John
    Hi, I'm in the early phases of designing an Azure-based application. One of the things that attracts me to Azure is the scalability, given the variability of the demand I'm likely to expect. As such I'm trying to keep things loosely coupled so I can add instances when I need to. The recommendations I've seen for architecting an application for Azure include keeping web role logic to a minimum, and having processing done in worker roles, using queues to communicate and some sort of back-end store like SQL Azure or Azure Tables. This seems like a good idea to me as I can scale up either or both parts of the application without any issue. However I'm curious if there are any best practices (or if anyone has any experiences) for when it's best to just have the web role talk directly to the data store vs. sending data by the queue? I'm thinking of the case where I have a simple insert to do from the web role - while I could set this up as a message, send it on the queue, and have a worker role pick it up and do the insert, it seems like a lot of double-handling. However I also appreciate that it may be the case that this is better in the long run, in case the web role gets overwhelmed or more complex logic ends up being required for the insert. I realise this might be a case where the answer is "it depends entirely on the situation, check your perf metrics" - but if anyone has any thoughts I'd be very appreciative! Thanks John

    Read the article

  • iOS UIImageView dataWithContentsOfURL returning empty

    - by user761389
    I'm trying to display an image from a URL in a UIImageView and I'm seeing some very peculiar results. The bit of code that I'm testing with is below: imageURL = @"http://images.shopow.co.uk/image/user_dyn/1073/32/32"; imageURL = @"http://images.shopow.co.uk/assets/profile_images/default/32_32/avatar-male-01.jpg"; NSURL *imageURLRes = [NSURL URLWithString:imageURL]; NSData *imageData = [NSData dataWithContentsOfURL:imageURLRes]; UIImage *image = [UIImage imageWithData:imageData]; NSLog(@"Image Data: %@", imageData); In its current form I can see data in the output window, which is what I'd expect. However, if I comment out the second imageURL so I'm referencing the first, I get empty data and therefore nil is returned by imageWithData. What is possibly more confusing is that the first image is basically the same as the second but it's been through a PHP processing script. I'm nearly certain that it isn't the script that's causing the issue because if I use this instead imageURL = @"http://images.shopow.co.uk/image/product_dynimg/389620/32/32" the image is displayed and this uses the same image processing script. I'm struggling to find any difference in the images that would cause this to occur. Any help would be appreciated.

    Read the article

  • application specific seed data population

    - by user339108
    Env: JBoss, (h2, MySQL, postgres), JPA, Hibernate 3.3.x @Id @GeneratedValue(strategy = IDENTITY) private Integer key; Currently our primary keys are created using the above annotation. We expect to support a large number of users (~a million users); what key type should be used? Should it be Integer or Long, or should I use the unsigned versions of the above declarations? We have a J2EE application which needs to be populated with some seed data on installation. On purchase, the customer creates his own data on top of the application. We just want to make sure that there is enough room to ship, modify or add data for future releases. What would be the best mechanism to support this? We had looked at starting all table identifiers from a certain id (say 1000), but this mandates modifying primary key generation to have table or sequence based generators, and we have around 100 tables. We are not sure if this is the right strategy for this. If we use a signed integer approach for the key, would it make sense to have the seed data as everything starting from 0 and below (i.e. negative numbers), so that all customer-specific data will be at 0 and above (i.e. positive numbers)?
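
    For scale, a signed Integer key tops out around 2.1 billion rows per table, which comfortably covers a million users unless some tables grow orders of magnitude faster than the user count. One way to reserve a block of ids for shipped seed data without resorting to negative keys is a generator with an initialValue; this is a sketch only, and the generator and table names are made up for illustration:

        // Sketch: start application-generated ids at 1000 so ids 1..999 stay free
        // for seed data inserted by the installer. Names below are hypothetical.
        @Id
        @GeneratedValue(strategy = GenerationType.TABLE, generator = "app_id_gen")
        @TableGenerator(name = "app_id_gen",
                        table = "id_generator",
                        pkColumnValue = "customer",
                        initialValue = 1000,      // customer-created rows start here
                        allocationSize = 50)
        private Integer key;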

    Read the article

  • Unit Testing the Use of TransactionScope

    - by Randolpho
    The preamble: I have designed a strongly interfaced and fully mockable data layer class that expects the business layer to create a TransactionScope when multiple calls should be included in a single transaction. The problem: I would like to unit test that my business layer makes use of a TransactionScope object when I expect it to. Unfortunately, the standard pattern for using TransactionScope is as follows: using(var scope = new TransactionScope()) { // transactional methods datalayer.InsertFoo(); datalayer.InsertBar(); scope.Complete(); } While this is a really great pattern in terms of usability for the programmer, testing that it's done seems... unpossible to me. I cannot detect that a transient object has been instantiated, let alone mock it to determine that a method was called on it. Yet my goal for coverage implies that I must. The Question: How can I go about building unit tests that ensure TransactionScope is used appropriately according to the standard pattern? Final Thoughts: I've considered a solution that would certainly provide the coverage I need, but have rejected it as overly complex and not conforming to the standard TransactionScope pattern. It involves adding a CreateTransactionScope method on my data layer object that returns an instance of TransactionScope. But because TransactionScope contains constructor logic and non-virtual methods and is therefore difficult if not impossible to mock, CreateTransactionScope would return an instance of DataLayerTransactionScope which would be a mockable facade into TransactionScope. While this might do the job, it's complex and I would prefer to use the standard pattern. Is there a better way?
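
    One approach that avoids wrapping TransactionScope at all (a sketch; the Moq-style mock syntax and the business-layer method name are assumptions, not part of the question): TransactionScope works by setting the ambient System.Transactions.Transaction.Current, so the mocked data layer can simply record whether an ambient transaction existed when it was called.

        // Sketch: assert that the business layer created an ambient transaction
        // before calling into the data layer.
        bool sawAmbientTransaction = false;

        mockDataLayer.Setup(d => d.InsertFoo())
                     .Callback(() => sawAmbientTransaction = Transaction.Current != null);

        businessLayer.DoTransactionalWork(mockDataLayer.Object);   // method name is hypothetical

        Assert.IsTrue(sawAmbientTransaction,
            "InsertFoo should be called inside an ambient TransactionScope");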

    Read the article

  • form target iframe not working in IE7 and below in a Facebook fan tab

    - by greatcaesarsghost
    This issue is similar to the one discussed in this thread, only mine is in a Facebook fan page tab (FBML/FBJS). The fix described in the referenced question works fine outside of Facebook, but for whatever reason I can't get it to work in the posted FBJS. Here's a stripped-down version of what I'm trying to do: <script type="text/javascript"> function doit() { document.getElementById('msg').setInnerXHTML('<iframe id="testframe" name="testframe" frameborder="0" />'); } </script> <div id="msg"></div> <form action="http://somesite.com/whatever.php" method="post" id="testform" name="testform" target="testframe"> <input type="text" id="txt1" name="txt1" /> <input type="submit" id="btn" name="btn" value="test" onclick="doit();" /> </form> -- This behaves as you'd expect in all browsers, except IE <= 7, where it opens a new window. IE's dev tool shows the iframe as having a 'submitName' attribute, but no 'name' attribute. Even manually setting the name (document.getElementById('testframe').setName('testframe')) fails to work the way it would outside of Facebook. Has anyone run into this same issue, and if so, is there any way around it? Thank you.

    Read the article

  • Entity Framework self-referencing loop detected

    - by Lyd0n
    I have a strange error. I'm experimenting with a .NET 4.5 Web API, Entity Framework and MS SQL Server. I've already created the database and set up the correct primary and foreign keys and relationships. I've created a .edmx model and imported two tables: Employee and Department. A department can have many employees and this relationship exists. I created a new controller called EmployeeController using the scaffolding options to create an API controller with read/write actions using Entity Framework. In the wizard, selected Employee as the model and the correct entity for the data context. The method that is created looks like this: // GET api/Employee public IEnumerable<Employee> GetEmployees() { var employees = db.Employees.Include(e => e.Department); return employees.AsEnumerable(); } When I call my API via /api/Employee, I get this error: ...The 'ObjectContent`1' type failed to serialize the response body for content type 'application/json; ...System.InvalidOperationException","StackTrace":null,"InnerException":{"Message":"An error has occurred.","ExceptionMessage":"Self referencing loop detected with type 'System.Data.Entity.DynamicProxies.Employee_5D80AD978BC68A1D8BD675852F94E8B550F4CB150ADB8649E8998B7F95422552'. Path '[0].Department.Employees'.","ExceptionType":"Newtonsoft.Json.JsonSerializationException","StackTrace":" ... Why is it self referencing [0].Department.Employees? That doesn't make a whole lot of sense. I would expect this to happen if I had circular referencing in my database but this is a very simple example. What could be going wrong?
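
    Two workarounds that are commonly paired with this setup (a sketch, under the assumptions that the generated context is a DbContext and that JSON formatting goes through Json.NET; the exact placement, e.g. WebApiConfig.Register or the controller, is up to you): tell the serializer to ignore reference loops, and return plain entities rather than lazy-loading proxies.

        // 1) In the Web API configuration: skip circular references instead of throwing.
        config.Formatters.JsonFormatter.SerializerSettings.ReferenceLoopHandling =
            Newtonsoft.Json.ReferenceLoopHandling.Ignore;

        // 2) In the controller: disable proxy creation so Department.Employees
        //    is not a lazy-loading proxy that points back at its own employees.
        db.Configuration.ProxyCreationEnabled = false;
        var employees = db.Employees.Include(e => e.Department).ToList();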

    Read the article

  • Entity Framework not populating context

    - by stimms
    I'm just starting out with some Entity Framework exploration; I figured it was time to see what everybody was complaining about. I am running into an issue where the entities don't seem to be returning any of the object context. I generated the model from a database with three tables which link to one another: Courses, Instructors, CanTeach. Relationships are as you would expect: a course can relate to multiple CanTeach entities and an instructor can also relate to multiple CanTeach entities. I also added an OData service to my project which also makes use of the same model. So I can run queries like from a in CanTeach where a.Instructor.FirstName == "Barry" select new { Name = a.Instructor.FirstName + " " + a.Instructor.LastName, Course = a.Course.Name} without issue against the OData endpoint using LINQPad. However, when I do a simple query like public Instructor GetInstructorFromID(int ID) { return context.Instructors.Where(i => i.ID == ID).FirstOrDefault(); } the CanTeach list is empty. I know everything in EF is lazy loaded and it is possible that my context is out of scope by the time I look at the object context, however even trying to get the object context as soon as the query is run results in an empty object context. What am I doing wrong?
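
    A hedged sketch of the usual fix (assuming the generated context exposes the navigation property as CanTeach, as described): lazy loading only populates a collection while the context is alive and proxies are enabled, so a repository method that hands the entity out should eager-load the collection with Include.

        // Sketch: eager-load CanTeach so it is populated before the context goes out of scope.
        public Instructor GetInstructorFromID(int ID)
        {
            return context.Instructors
                          .Include("CanTeach")     // or .Include(i => i.CanTeach) with System.Data.Entity imported
                          .Where(i => i.ID == ID)
                          .FirstOrDefault();
        }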

    Read the article
