Search Results

Search found 19878 results on 796 pages for 'multiple dispatch'.


  • Unit Testing - Algorithm or Sample based?

    - by ohadsc
    Say I'm trying to test a simple Set class:

        public class IntSet : IEnumerable<int>
        {
            Add(int i) { ... }
            // IEnumerable implementation...
        }

    And suppose I'm trying to test that no duplicate values can exist in the set. My first option is to insert some sample data into the set, and test for duplicates using my knowledge of the data I used, for example:

        // OPTION 1
        void InsertDuplicateValues_OnlyOneInstancePerValueShouldBeInTheSet()
        {
            var set = new IntSet();
            // 3 will be added 3 times
            var values = new List<int> { 1, 2, 3, 3, 3, 4, 5 };
            foreach (int i in values)
                set.Add(i);
            // I know 3 is the only candidate to appear multiple times
            int counter = 0;
            foreach (int i in set)
                if (i == 3) counter++;
            Assert.AreEqual(1, counter);
        }

    My second option is to test for my condition generically:

        // OPTION 2
        void InsertDuplicateValues_OnlyOneInstancePerValueShouldBeInTheSet()
        {
            var set = new IntSet();
            // The following could even be a list of random numbers with a duplicate
            var values = new List<int> { 1, 2, 3, 3, 3, 4, 5 };
            foreach (int i in values)
                set.Add(i);
            // I am not using my prior knowledge of the sample data;
            // the following line would work for any data
            CollectionAssert.AreEquivalent(new HashSet<int>(values), set);
        }

    Of course, in this example I conveniently have a set implementation to check against, as well as code to compare collections (CollectionAssert). But what if I didn't have either? This is the situation when you are testing your real-life custom business logic. Granted, testing for expected conditions generically covers more cases - but it becomes very similar to implementing the logic again (which is both tedious and useless - you can't use the same code to check itself!). Basically I'm asking whether my tests should look like "insert 1, 2, 3 then check something about 3" or "insert 1, 2, 3 and check for something in general".

    EDIT - To help me understand, please state in your answer whether you prefer OPTION 1 or OPTION 2 (or neither, or that it depends on the case, etc.)
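
    Not part of the question - a hedged sketch of a middle ground for the "what if I didn't have either?" case, stating the generic condition without borrowing HashSet<int> or CollectionAssert. It assumes the IntSet class above, NUnit-style asserts and System.Linq:

        using System.Collections.Generic;
        using System.Linq;
        using NUnit.Framework;

        [Test]
        public void InsertDuplicateValues_EachDistinctValueAppearsExactlyOnce()
        {
            var set = new IntSet();
            var values = new List<int> { 1, 2, 3, 3, 3, 4, 5 };
            foreach (int i in values)
                set.Add(i);

            // Generic condition: every distinct input value occurs exactly once in the set.
            foreach (int value in values.Distinct())
                Assert.AreEqual(1, set.Count(x => x == value));
        }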


  • Catch a PHP Object Instantiation Error

    - by Rob Wilkerson
    It's really irking me that PHP considers the failure to instantiate an object a Fatal Error (which can't be caught) for the application as a whole. I have a set of classes that aren't strictly necessary for my application to function - they're really a convenience. I have a factory object that attempts to instantiate the class variant that's indicated in a config file. This mechanism is being deployed for message storage and will support multiple store types: DatabaseMessageStore, FileMessageStore, MemcachedMessageStore, etc. A MessageStoreFactory class will read the application's preference from a config file, then instantiate and return an instance of the appropriate class. It might be easy enough to slap a conditional around the instantiation to ensure that class_exists(), but MemcachedMessageStore extends PHP's Memcached class. As a result, the class_exists() test will succeed - though instantiation will fail - if the memcached bindings for PHP aren't installed. Is there any other way to test whether a class can be instantiated properly? If it can't, all I need to do is tell the user which features won't be available to them, but let them continue on with the application. Thanks.
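
    A rough sketch, not the poster's code: the factory can test for the memcached bindings up front with extension_loaded() and fall back, so the fatal class-definition error never happens. The class names are taken from the question; the fallback choice is an assumption.

        class MessageStoreFactory
        {
            public static function create($type)
            {
                if ($type === 'memcached' && !extension_loaded('memcached')) {
                    // Memcached bindings are missing: warn and degrade gracefully.
                    trigger_error('Memcached support unavailable; using file store', E_USER_NOTICE);
                    $type = 'file';
                }
                switch ($type) {
                    case 'memcached': return new MemcachedMessageStore();
                    case 'database':  return new DatabaseMessageStore();
                    default:          return new FileMessageStore();
                }
            }
        }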


  • Appropriate use of die()?

    - by letseatfood
    I create pages in my current PHP project by using the following template:

        <?php include 'bootstrap.php'; head(); ?>
        <!-- Page content here -->
        <?php foot(); ?>

    Is the following example an appropriate use of die()? Also, what sort of problems might this cause for me, if any?

        <?php
        include 'bootstrap.php';
        head();

        try {
            // Simulate throwing an exception from some class
            throw new Exception('Something went wrong!');
        } catch (Exception $e) { ?>
            <p>Please fix the following errors:</p>
            <p><?php echo $e->getMessage(); ?></p>
        <?php
            foot();
            die();
        }

        // If no exception is thrown above, continue script
        doSomething();
        doSomeOtherThing();
        foot();
        ?>

    Basically, I have a script with multiple tasks on it and I am trying to set up a graceful way to notify the user of input errors while preventing the remaining portion of the script from executing. Thanks!
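
    For comparison, a minimal sketch (not from the original post) of the same flow using an early return from a wrapper function instead of die(), so foot() is emitted exactly once and any shutdown handlers still run; doSomething()/doSomeOtherThing() are the poster's placeholders.

        <?php
        include 'bootstrap.php';
        head();

        function run_tasks()
        {
            try {
                doSomething();
                doSomeOtherThing();
            } catch (Exception $e) {
                echo '<p>Please fix the following errors:</p>';
                echo '<p>' . htmlspecialchars($e->getMessage()) . '</p>';
                return; // stop the remaining tasks, but let the page finish normally
            }
        }

        run_tasks();
        foot();
        ?>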


  • Thread mutex behaviour

    - by Alberteddu
    Hi there, I'm learning C. I'm writing an application with multiple threads; I know that when a variable is shared between two or more threads, it is better to lock/unlock using a mutex to avoid races and inconsistent values. This is very clear when I want to change or read one variable:

        int i = 0; /** Global */
        static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

        /** Thread 1. */
        pthread_mutex_lock(&mutex);
        i++;
        pthread_mutex_unlock(&mutex);

        /** Thread 2. */
        pthread_mutex_lock(&mutex);
        i++;
        pthread_mutex_unlock(&mutex);

    This is correct, I think. The variable i, at the end of the executions, contains the integer 2. Anyway, there are some situations in which I don't know exactly where to put the two function calls. For example, suppose you have a function obtain(), which returns a global variable. I need to call that function from within the two threads. I also have two other threads that call the function set(), defined with a few arguments; this function will set the same global variable. The two functions are necessary when you need to do something before getting/setting the var.

        /** (0) */
        /** Thread 1, or 2, or 3... */
        if (obtain() == something) {
            if (obtain() == somethingElse) {
                // Do this, sometimes obtain() and sometimes set(random number) (1)
            } else {
                // Do that, just obtain(). (2)
            }
        } else {
            // Do this and do that (3)
            // If # of threads * 3 > 10, then set(3*10), for example. (4)
        }
        /** (5) */

    Where do I have to lock, and where do I have to unlock? The situation can be, I think, even more complex. I would appreciate an exhaustive answer. Thank you in advance. —Alberto
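
    A minimal sketch (not from the question) of one common rule of thumb: keep the lock inside obtain()/set() for single reads and writes, and take it explicitly around any check-then-act sequence such as the if(obtain() == ...) block, because two separately locked calls are not atomic together. The names and the trivial bodies are assumptions; the real obtain()/set() do extra work.

        #include <pthread.h>

        static int shared_value = 0;
        static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

        int obtain(void)
        {
            pthread_mutex_lock(&mutex);
            int v = shared_value;            /* protected read */
            pthread_mutex_unlock(&mutex);
            return v;
        }

        void set_value(int v)
        {
            pthread_mutex_lock(&mutex);
            shared_value = v;                /* protected write */
            pthread_mutex_unlock(&mutex);
        }

        void *worker(void *arg)
        {
            (void)arg;
            /* The whole read-compare-write must sit under one lock; calling
             * obtain() and set_value() here instead would drop the lock in between
             * and another thread could change the value at that moment. */
            pthread_mutex_lock(&mutex);
            if (shared_value == 42)
                shared_value = 0;
            pthread_mutex_unlock(&mutex);
            return NULL;
        }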


  • The way cores, processes, and threads work exactly?

    - by unknownthreat
    I need a bit of advice to understand how this whole procedure works exactly. If I am incorrect in any part described below, please correct me. On a single-core CPU, the core runs each process in the OS, jumping around from one process to another to make the best use of itself. A process can also have many threads, and the core runs through these threads while it is running the respective process. Now, on a multi-core CPU: do the cores all run within the same process together, or can the cores run separately in different processes at one particular point in time? For instance, if program A is running two threads, can a dual-core CPU run both threads of this program? I think the answer should be yes if we are using something like OpenMP. But while the cores are running this OpenMP-embedded process, can one of the cores simply switch to another process? For programs that are created for a single core, when running at 100%, why is the CPU utilization spread across the cores (e.g. a dual-core CPU showing 80% and 20%, with the utilization percentages of all cores always adding up to 100% in this case)? Do the cores try to help each other run each thread of each process in some way? Frankly, I'm not sure how this works exactly. Any advice is appreciated.


  • Unit Testing the Use of TransactionScope

    - by Randolpho
    The preamble: I have designed a strongly interfaced and fully mockable data layer class that expects the business layer to create a TransactionScope when multiple calls should be included in a single transaction.

    The problem: I would like to unit test that my business layer makes use of a TransactionScope object when I expect it to. Unfortunately, the standard pattern for using TransactionScope is as follows:

        using (var scope = new TransactionScope())
        {
            // transactional methods
            datalayer.InsertFoo();
            datalayer.InsertBar();
            scope.Complete();
        }

    While this is a really great pattern in terms of usability for the programmer, testing that it's done seems... impossible to me. I cannot detect that a transient object has been instantiated, let alone mock it to determine that a method was called on it. Yet my goal for coverage implies that I must.

    The Question: How can I go about building unit tests that ensure TransactionScope is used appropriately according to the standard pattern?

    Final Thoughts: I've considered a solution that would certainly provide the coverage I need, but have rejected it as overly complex and not conforming to the standard TransactionScope pattern. It involves adding a CreateTransactionScope method on my data layer object that returns an instance of TransactionScope. But because TransactionScope contains constructor logic and non-virtual methods, and is therefore difficult if not impossible to mock, CreateTransactionScope would return an instance of DataLayerTransactionScope, which would be a mockable facade over TransactionScope. While this might do the job, it's complex and I would prefer to use the standard pattern. Is there a better way?
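
    For what it's worth, a hedged sketch (IDataLayer, BusinessLayer and the test framework are assumed names, not the poster's code) of one common workaround: let a fake data layer observe the ambient transaction, which System.Transactions exposes as Transaction.Current while a TransactionScope is open.

        using System.Transactions;

        public class RecordingDataLayer : IDataLayer
        {
            public bool InsertFooSawTransaction { get; private set; }
            public bool InsertBarSawTransaction { get; private set; }

            public void InsertFoo() { InsertFooSawTransaction = Transaction.Current != null; }
            public void InsertBar() { InsertBarSawTransaction = Transaction.Current != null; }
        }

        [Test]
        public void Save_WrapsBothInsertsInOneAmbientTransaction()
        {
            var dataLayer = new RecordingDataLayer();
            var businessLayer = new BusinessLayer(dataLayer);

            businessLayer.Save();

            Assert.IsTrue(dataLayer.InsertFooSawTransaction);
            Assert.IsTrue(dataLayer.InsertBarSawTransaction);
        }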


  • How to test a Grails Service that utilizes a criteria query (with spock)?

    - by user569825
    I am trying to test a simple service method. That method mainly just returns the results of a criteria query, and I want to test whether it returns the one result or not (depending on what is queried for). The problem is that I am unaware of how to write the corresponding test correctly. I am trying to accomplish it via Spock, but doing the same with any other way of testing also fails. Can someone tell me how to amend the test to make it work for the task at hand? (BTW, I'd like to keep it a unit test, if possible.)

    The EventService method:

        public HashSet<Event> listEventsForDate(Date date, int offset, int max) {
            date.clearTime()
            def c = Event.createCriteria()
            def results = c {
                and {
                    le("startDate", date+1) // starts tonight at midnight or prior?
                    ge("endDate", date)     // ends today or later?
                }
                maxResults(max)
                order("startDate", "desc")
            }
            return results
        }

    The Spock specification:

        package myapp

        import grails.plugin.spock.*
        import spock.lang.*

        class EventServiceSpec extends Specification {
            def event
            def eventService = new EventService()

            def setup() {
                event = new Event()
                event.publisher = Mock(User)
                event.title = 'et'
                event.urlTitle = 'ut'
                event.details = 'details'
                event.location = 'location'
                event.startDate = new Date(2010,11,20, 9, 0)
                event.endDate = new Date(2011, 3, 7,18, 0)
            }

            def "list the Events of a specific date"() {
                given: "An event ranging over multiple days"

                when: "I look up a date for its respective events"
                def results = eventService.listEventsForDate(searchDate, 0, 100)

                then: "The event is found or not - depending on the requested date"
                numberOfResults == results.size()

                where:
                searchDate           | numberOfResults
                new Date(2010,10,19) | 0 // one day before startDate
                new Date(2010,10,20) | 1 // at startDate
                new Date(2010,10,21) | 1 // one day after startDate
                new Date(2011, 1, 1) | 1 // someday during the event range
                new Date(2011, 3, 6) | 1 // one day before endDate
                new Date(2011, 3, 7) | 1 // at endDate
                new Date(2011, 3, 8) | 0 // one day after endDate
            }
        }

    The error:

        groovy.lang.MissingMethodException: No signature of method: static myapp.Event.createCriteria() is applicable for argument types: () values: []
            at myapp.EventService.listEventsForDate(EventService.groovy:47)
            at myapp.EventServiceSpec.list the Events of a specific date(EventServiceSpec.groovy:29)


  • Web Applications Development: Security practices for Application design

    - by Shyam
    Hi, as I am creating more web applications that are targeted at multiple users, I have figured out that I have to start thinking about user management and security. At a glance, and in my ideal world, all users belong to a group. Permissions and access are thus defined per group (and inherited by the users of that group). Logically, I have my group of administrators, which is identified with a level "7" (integer) clearance. A group of web users has, for example, level "1". In general this all works great for me, but I need some kind of checklist to keep in mind while securing my system, and some general practices. I am not looking for a specific environment; I want to learn the whys and hows. An example is privilege escalation: if someone were able to "push" themselves into a group with higher privileges, for example the administrators, how can I prevent this, or what measures should I take as a precaution? I don't want to walk into a pitfall there. My question is basically: where can I find a good resource, list, policy or book that explains the security of web applications - the whys and the hows - and that is readable if you don't have any experience in the realm of advanced security? I prefer a free resource, as I believe I can't be the first one who has thought about this. Thank you for your answers, comments and feedback.


  • Stopping a MATLAB GUI callback

    - by leonhart88
    Dear All, I have a START and STOP button. When I hit START, i run a bunch of code in my callback. It's basically a sequential "script" that opens valves, dispenses water and then closes the valves...there is no while() loop and it doesn't repeat. I want to be able to stop this process at any time using the STOP button. Most of the related answers I've seen are in the cases where a while() loop is used. Some people have also suggested to periodically check if the STOP button was pressed (using a variable or handle variable). Since I do not have a while loop, I can't solve it that way. Also, I'd like to be able to exit immediately, without having to periodically check (because checking multiple times in my code would be ugly and confusing). Is there a way to terminate the callback which was interrupted by the STOP button? If not, is it possible to have the START button run a .m file and then have the STOP button terminate that .m file? The worst case scenario would be to check a variable periodically. UPDATE: Well, looks like the worst case scenario is what is suggested by MATLAB... http://www.mathworks.com/support/solutions/en/data/1-33IK85/index.html?product=ML&solution=1-33IK85 Thanks.


  • Can I share code & resources between Android projects without using a library?

    - by Tom
    The standard advice for sharing code & resources between Android projects is to use a library. Personally I find this works poorly if (a) the shared code changes a lot, or (b) your computer isn't fast enough. I also don't want to get into deploying multiple APKs, which seems to be necessary when I use dependent projects (i.e. Java Build Path, Projects tab). On the other hand, sharing a folder of source code by using the Eclipse linked source feature works great (Java Build Path, Source tab, Link Source button), but for these two issues: 1) I can't use the same technique to share resources. I can create the link to the resources parent folder, but then things get wonky and the shared resources don't get compiled (I'm using ADT 21). 2) So then I settle for copying the shared resources into each project, but this doesn't work either: the shared code can't import the copy of its resources, because it doesn't know the package name of the project that uses it. The solution I've been using is to access the resources dynamically, but that has become cumbersome as the number of resources grows. So, I need a solution to either (1) or (2), or I'll have to go back to a library project. (Or maybe there is another option I haven't thought of?)


  • iPhone Development - CoreData runtime error

    - by Mustafa
    I'm facing a strange CoreData issue. Here's the log:

        2010-04-07 15:59:36.913 MyProject[263:207] <MyEntity: 0x180370> (entity: MyEntity; id: 0x17e890 <x-coredata://0F55C533-41BD-4F09-9CCA-0CB304CAB065/MyEntity/p380> ; data: <fault>)
        2010-04-07 15:59:36.918 MyProject[263:207] *** Terminating app due to uncaught exception 'NSObjectInaccessibleException', reason: 'The NSManagedObject with ID:0x17e890 <x-coredata://0F55C533-41BD-4F09-9CCA-0CB304CAB065/MyEntity/p380> has been invalidated.'

    I have a hierarchy of UITableViewControllers that use NSFetchedResultsController to populate the tables, and when a particular row is selected, the detail view is shown:

        UITableView (MyMainEntity)
            UITableView (MyEntity)
                UITableView (MyEntity) detail view

    Both the MyMainEntity UITableView and the MyEntity UITableView use NSFetchedResultsController to show the records. Sometimes it crashes when I'm scrolling the table view, and sometimes it crashes when I try to open the detail view. I can navigate to the MyEntity detail view multiple times before the application crashes. What does this error mean? And how can I fix it!?


  • How do I do a count of records that meet a specific condition, dependent on several has_many relationships?

    - by Angela
    I have a model Campaign. A Campaign has many Events. Each Event has an attribute :days. A Campaign also has_many Contacts. Each Contact has a :date_entered attribute. The from_today(contact, event) method returns a number: the number of days from the contact's :date_entered until today, minus the event's :days. In other words, a positive number is the number of days remaining until the event's :days have elapsed. If it is negative, it means that the number of days that have elapsed since the :date_entered is greater than the :days attribute of the event - in other words, the event is overdue. What I would like to be able to do is call campaign.overdue and get back the total number of contacts that have an overdue event. It shouldn't count multiple events for a single contact - each contact should only be counted once. How do I do that? It seems like I would need to cycle through all the events for every contact and keep a counter, but I'm assuming that there is a better way.
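
    A rough Rails sketch (not from the question) of that counting idea, assuming the from_today(contact, event) helper described above is reachable from the model:

        class Campaign < ActiveRecord::Base
          has_many :events
          has_many :contacts

          # Count each contact at most once, no matter how many of its events are overdue.
          def overdue
            contacts.select { |contact|
              events.any? { |event| from_today(contact, event) < 0 }
            }.size
          end
        end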


  • Run Reporting Service in local mode and generate columns automatically?

    - by grady
    Hi, I have a SQL query that I want to use with MS Reporting Services in my ASP.NET application. So I created a report in local mode (rdlc) and attached it to a report viewer. Since my query uses parameters, I created a stored procedure with exactly those parameters. In addition to that, I have some textboxes which are used for entering the params for the query, and a button to call the stored procedure and fill the dataset which is bound to the report viewer. This works: I press the button and, according to what I entered, the correct data is shown. Now my question: in the future I plan to have multiple reports (which will be selected in a dropdown) and I wonder if I can somehow just call the correct stored procedure and, according to the columns which are SELECTed in the procedure, have those columns shown in the report. Example: I select report1 from the dropdown (the procedure for report 1 is called), and 5 columns are shown in the report viewer. I select report2 from the dropdown (the procedure for report 2 is called), and 8 columns are shown. Is that possible somehow? Thanks :-)


  • How to remove empty tables from a MySQL backup file.

    - by user280708
    I have multiple large MySQL backup files, all from different DBs and having different schemas. I want to load the backups into our EDW but I don't want to load the empty tables. Right now I'm cutting out the empty tables using AWK on the backup files, but I'm wondering if there's a better way to do this. If anyone is interested, this is my AWK script. EDIT: I noticed today that this script has some problems, please beware if you want to actually try to use it. Your output may be WRONG... I will post my changes as I make them.

        # File: remove_empty_tables.awk
        # Copyright (c) Northwestern University, 2010
        # http://edw.northwestern.edu

        /^--$/ {
            i = 0;
            line[++i] = $0;
            getline
            if ($0 ~ /-- Definition/) {
                inserts = 0;
                while ($0 !~ / ALTER TABLE .* ENABLE KEYS /) {
                    # If we already have an insert:
                    if (inserts > 0)
                        print
                    else {
                        # If we found an INSERT statement, the table is NOT empty:
                        if ($0 ~ /^INSERT /) {
                            ++inserts
                            # Dump the lines before the INSERT and then the INSERT:
                            for (j = 1; j <= i; ++j)
                                print line[j]
                            i = 0
                            print $0
                        }
                        # Otherwise we may yet find an insert, so save the line:
                        else
                            line[++i] = $0
                    }
                    getline # go to the next line
                }
                line[++i] = $0; getline
                line[++i] = $0; getline
                if (inserts > 0) {
                    for (j = 1; j <= i; ++j)
                        print line[j]
                    print $0
                }
                next
            } else {
                print "--"
            }
        }

        { print }


  • Advice: Python Framework Server/Worker Queue management (not Website)

    - by Muppet Geoff
    I am looking for some advice/opinions on which Python framework to use in an implementation of multiple 'Worker' PCs co-ordinated from a central queue manager. For completeness, the 'Worker' PCs will be running audio conversion routines (which I do not need advice on, and have standalone code that works). The audio conversion takes a long time, and I need to co-ordinate an arbitrary number of the 'Workers' from a central location, handing them conversion tasks (such as where to get the source files, or where to ask for the job configuration) with them reporting back some additional info, such as the runtime of the converted audio etc. At present, I have a script that makes a webservice call to get the 'configuration' for a conversion task, based on source files already located on the worker (we manually copy the source files to the worker, and that triggers a conversion routine). I want to change this, so that we can distribute conversion tasks ("Oy you, process this: xxx") based on availability, and in an ideal world, based on pending tasks too. There is a chance that workers can go offline mid-conversion (but this is not likely). All the workers are Windows based; the co-ordinator can be Windows or Linux. In my initial searches I have come across the following - and I know that some are cross-dependent: Celery (with RabbitMQ), Twisted, and Django. Using a framework, rather than home-brewing, seems to make more sense to me right now. I have a limited timeframe in which to develop this functional extension. An additional consideration would be using a framework that is compatible with PyQt/PySide so that I can write a simple UI to display queue status etc. I appreciate that the specifics above are a little vague, and I hope that someone can offer me a pointer or two. Again: I am looking for general advice on which Python framework to investigate further, for developing a Server/Worker 'queue management' solution, for non-web activities (this is why Django didn't seem the right fit).
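
    A minimal Celery sketch (an assumption for illustration, not the poster's code) of how the coordinator could hand conversion jobs to the Windows workers over a RabbitMQ broker; run_conversion() stands in for the existing standalone routine.

        from celery import Celery

        app = Celery('audio', broker='amqp://guest@localhost//', backend='rpc://')

        @app.task
        def convert(source_path, profile):
            # Hypothetical call into the standalone conversion code the workers already run.
            runtime = run_conversion(source_path, profile)
            return {'source': source_path, 'runtime': runtime}

        # On the coordinator:
        #   result = convert.delay('/jobs/episode1.wav', 'mp3-128k')
        #   print(result.get(timeout=3600))   # blocks until a worker reports back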


  • Pre Project Documentation

    - by DeanMc
    I have an issue that I feel many programmers can relate to... I have worked on many small-scale projects. After my initial paper brainstorm I tend to start coding. What I come up with is usually a rough working model of the actual application. I design in a disconnected fashion, so I am talking about underlying code libraries; user interfaces are the last thing, as the library usually dictates what is needed in the UI. As my projects get bigger, I worry that my "spec" or design document should grow with them. The above paragraph, from my investigations, is echoed all across the internet in one fashion or another. Where a UI is concerned there is a bit more information, but it is UI-specific and does not relate to code libraries. What I am beginning to realise is that maybe code is code is code. It seems from my extensive research that there is no 1:1 mapping between a design document and the code. When I need to research a topic I dump information into OneNote, and from there I prioritise features into versions and then into related chunks so that development runs in a fairly linear fashion. My tasks tend to look like so:

        Implement Binary File Reader
        Implement Binary File Writer
        Create Object to encapsulate Data for expression to the caller

    Now any programmer worth his salt is aware that between those three to-do items could be a potential wall of code that could expand out to multiple files. I have tried to map the complete code process for each task, but I simply don't think it can be done effectively. By the time one mangles pseudo code it is essentially code anyway, so the time investment is negated. So my question is this: am I right in assuming that the best documentation is the code itself? We are all in agreement that a high-level overview is needed. How high should this be? Do you design to statement, class or concept level? What works for you?


  • Hibernate NamingStrategy implementation that maintains state between calls

    - by Robert Petermeier
    Hi, I'm working on a project where we use Hibernate and JBoss 5.1. We need our entity classes to be mapped to Oracle tables that follow a certain naming convention. I'd like to avoid having to specify each table and column name in annotations. Therefore, I'm currently considering writing a custom implementation of org.hibernate.cfg.NamingStrategy. The SQL naming conventions require the names of columns to have a suffix that is equivalent to a prefix of the table name. If there is a table "T100_RESOURCE", the ID column would have to be named "RES_ID_T100". In order to implement this in a NamingStrategy, the implementation would have to maintain state, i.e. the current class name it is creating the mappings for. It would rely on Hibernate to always call classToTableName() before propertyToColumnName(), and to determine all column names by calling propertyToColumnName() before the next call to classToTableName(). Is it safe to do that, or are there situations where Hibernate will mix things up? I am not only thinking of problems with multiple threads here (which can be solved by keeping the last class name in a ThreadLocal), but also of Hibernate deliberately calling this out of order in certain circumstances - for example, Hibernate asking for mappings of three properties of class A, then one of class B, then again more attributes of class A.
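
    For illustration only, a rough sketch of such a stateful strategy, assuming Hibernate 3's ImprovedNamingStrategy as a base; it bakes in exactly the call-order assumption being asked about, and tablePrefixFor() is a hypothetical lookup standing in for however the project maps entities to the numeric prefix.

        import org.hibernate.cfg.ImprovedNamingStrategy;

        public class PrefixSuffixNamingStrategy extends ImprovedNamingStrategy {

            // Table prefix of the class currently being mapped, one per thread.
            private final ThreadLocal<String> currentPrefix = new ThreadLocal<String>();

            @Override
            public String classToTableName(String className) {
                String prefix = tablePrefixFor(className);                       // e.g. "T100"
                currentPrefix.set(prefix);
                return prefix + "_" + super.classToTableName(className).toUpperCase();
            }

            @Override
            public String propertyToColumnName(String propertyName) {
                String column = super.propertyToColumnName(propertyName).toUpperCase();
                return column + "_" + currentPrefix.get();                        // e.g. "ID_T100"
            }

            // Hypothetical: derive the numeric table prefix for an entity class.
            private String tablePrefixFor(String className) {
                return "T100";
            }
        }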


  • Separating and counting CSV entries from a database (Access/ASP Classic)

    - by Katherine Perotta
    Hey, I could really use some help with this one. I have an FAQ with multiple "tags" and I would like to separate and count them. They are currently in the database as follows:

        ID    TITLE            CONTENT             TAGS
        1     sampletitle 1    samplecontent       tag1,tag2,tag3
        2     sampletitle 2    moresamplestuff     tag3,tag4,tag5

    How could I go about counting the number of times each tag is used? In the end, would it be easier to just create a separate table called TAGS, with a single tag corresponding to a single ID in FAQ? The only reason I don't prefer doing something like that is because I have so much data already it would take quite a while. However, if there's no alternative, or if it's easier than doing string parsing like that, I'm willing to do it. The goal is to display each unique tag and the number of times it is used. Would it be better to do the heavy lifting in the database or in ASP? I have gotten as far as getting a list of all tags and displaying them in an array (with each tag separated). So at this point what I need to do is count each value and then remove the duplicates (while preserving the count number somewhere). This is in ASP Classic using an Access database. Thanks!
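
    A rough ASP Classic sketch (not from the question) of the counting step using a Scripting.Dictionary; rs is assumed to be an open recordset over the FAQ table.

        Dim counts, tags, tag, i
        Set counts = Server.CreateObject("Scripting.Dictionary")

        Do While Not rs.EOF
            ' The & "" coerces the field value to a string and copes with Null values.
            tags = Split(rs("TAGS") & "", ",")
            For i = 0 To UBound(tags)
                tag = Trim(tags(i))
                If Len(tag) > 0 Then
                    If counts.Exists(tag) Then
                        counts(tag) = counts(tag) + 1
                    Else
                        counts.Add tag, 1
                    End If
                End If
            Next
            rs.MoveNext
        Loop

        ' counts.Keys and counts(key) now give each unique tag and how often it is used.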


  • Trying to edit an entity with data from dropdowns in MVC...

    - by user598352
    Hello! I'm having trouble getting my head around sending multiple models to a view in MVC. My problem is the following: using EF4 I have a table with attributes organised by category. (Couldn't post an image - I have a table called Attributes (AttributeTitle, AttributeName, CategoryID) connected to a table called Category (CategoryTitle).) What I want to do is be able to edit an attribute entity and have a dropdown of categories to choose from. I tried to make a custom view model:

        public class AttributeViewModel
        {
            public AttributeViewModel() { }
            public Attribute Attribute { get; set; }
            public IQueryable<Category> AllCategories { get; set; }
        }

    But it just ended up being a mess.

        <div class="editor-field">
            <%: Html.DropDownList("Category", new SelectList((IEnumerable)Model.AllCategories, "CategoryID", "CategoryName")) %>
        </div>

    I was getting it back to the controller...

        [HttpPost]
        public ActionResult Edit(int AttributeID, FormCollection formcollection)
        {
            var _attribute = ProfileDB.GetAttribute(AttributeID);
            int _selcategory = Convert.ToInt32(formcollection["Category"]);
            _attribute.CategoryID = (int)_selcategory;
            try
            {
                UpdateModel(_attribute);   // <--- Error here
                ProfileDB.SaveChanges();
                return RedirectToAction("Index");
            }
            catch (Exception e)
            {
                return View(_attribute);
            }
        }

    I've debugged the code and my _attribute looks correct, and _attribute.CategoryID = (int)_selcategory updates the model, but then I get the error. Somewhere here I thought that there should be a cleaner way to do this, if only I could send two models to the view instead of having to make a custom view model. To sum it up: I want to edit my attribute and have a dropdown of all of the available categories. Any help much appreciated!
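
    A hedged sketch (the view-model and property names are assumptions, not the poster's final code) of one commonly used shape for this: post back a plain SelectedCategoryId instead of calling UpdateModel on the EF entity, then copy it onto the entity and save.

        public class AttributeEditViewModel
        {
            public int AttributeID { get; set; }
            public int SelectedCategoryId { get; set; }
            public SelectList AllCategories { get; set; }   // rebuilt only when redisplaying the form
        }

        // In the view:
        //   <%: Html.DropDownListFor(m => m.SelectedCategoryId, Model.AllCategories) %>

        [HttpPost]
        public ActionResult Edit(AttributeEditViewModel model)
        {
            var attribute = ProfileDB.GetAttribute(model.AttributeID);
            attribute.CategoryID = model.SelectedCategoryId;
            ProfileDB.SaveChanges();
            return RedirectToAction("Index");
        }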


  • Identity alternative for SQL Azure Federation : are Azure Queues or Service Bus Queues a good choice?

    - by JYL
    Like many developers, I'm looking for a way to integrate my existing app with SQL Azure Federations, and replacing the identity columns (the primary keys of my tables) is a big problem. For many reasons, I do NOT want to use GUIDs for my primary keys (please don't open the GUID-or-not debate, it's not my question: I just don't want a GUID, period). So I need to build a key provider to replace the "identity" feature of a standard SQL database. I'm using Entity Framework, so I can easily find one place to set the Id value just before the insert (by overriding the SaveChanges method of my ObjectContext class). I just need to find a "not too complicated" implementation for getting the current Id, which is "farm-ready". I've read the SO post "ID Generation for Sharded Database (Azure Federated Database)" and "Synchronizing Multiple Nodes in Windows Azure" from MSDN Magazine, but this solution sounds a bit complicated for me. I'm thinking about creating (automatically) one Azure queue for each SQL table, which contains a pre-loaded list of consecutive integers. When I want an Id value, I just have to get a message from the queue (which becomes invisible and is deleted on the way), which gives me the current available Id. About the choice between "Windows Azure Queues" and "Windows Azure Service Bus Queues", I prefer "Windows Azure Queues", due to the "high" latency of Service Bus Queues. I don't think that the lack of an ordering guarantee in Azure Queues is a problem. What do you think about that idea of using Azure Queues to provide Id values? Do you see any argument to give up that idea? Do you have a better idea, or even a good practice, for providing integer ids in SQL Azure Federation databases? Thanks.
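
    A rough sketch of the queue-backed id provider described above (an illustration, not an existing API); it assumes the old Windows Azure StorageClient library, and that each queue already exists and has been pre-loaded with consecutive integers, one per message.

        using System;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        public class QueueIdProvider
        {
            private readonly CloudQueue _queue;

            public QueueIdProvider(CloudStorageAccount account, string tableName)
            {
                var client = account.CreateCloudQueueClient();
                _queue = client.GetQueueReference("ids-" + tableName.ToLowerInvariant());
            }

            // Popping a message makes it invisible to other nodes, so no two callers
            // can be handed the same value; deleting it consumes the id for good.
            public int NextId()
            {
                CloudQueueMessage message = _queue.GetMessage();
                if (message == null)
                    throw new InvalidOperationException("Id queue is empty - refill it.");
                _queue.DeleteMessage(message);
                return int.Parse(message.AsString);
            }
        }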


  • Connection Pool Strategy: Good, Bad or Ugly?

    - by Drew
    I'm in charge of developing and maintaining a group of Web Applications that are centered around similar data. The architecture I decided on at the time was that each application would have their own database and web-root application. Each application maintains a connection pool to its own database and a central database for shared data (logins, etc.) A co-worker has been positing that this strategy will not scale because having so many different connection pools will not be scalable and that we should refactor the database so that all of the different applications use a single central database and that any modifications that may be unique to a system will need to be reflected from that one database and then use a single pool powered by Tomcat. He has posited that there is a lot of "meta data" that goes back and forth across the network to maintain a connection pool. My understanding is that with proper tuning to use only as many connections as necessary across the different pools (low volume apps getting less connections, high volume apps getting more, etc.) that the number of pools doesn't matter compared to the number of connections or more formally that the difference in overhead required to maintain 3 pools of 10 connections is negligible compared to 1 pool of 30 connections. The reasoning behind initially breaking the systems into a one-app-one-database design was that there are likely going to be differences between the apps and that each system could make modifications on the schema as needed. Similarly, it eliminated the possibility of system data bleeding through to other apps. Unfortunately there is not strong leadership in the company to make a hard decision. Although my co-worker is backing up his worries only with vagueness, I want to make sure I understand the ramifications of multiple small databases/connections versus one large database/connection pool.


  • Is it bad practice to initialise fields outside of an explicit constructor?

    - by MrTortoise
    So it's Monday and we are arguing about coding practices. The examples here are a little too simple, but the real deal has several constructors. In order to initialise the simple values (e.g. dates to their min value) I have moved the code out of the constructors and into the field definitions.

        public class ConstructorExample
        {
            string _string = "John";
        }

        public class ConstructorExample2
        {
            string _string;

            public ConstructorExample2()
            {
                _string = "John";
            }
        }

    How should it be done by the book? I tend to be very case-by-case and so am maybe a little lax about this kind of thing. However, I feel that Occam's razor tells me to move the initialisation out of multiple constructors. Of course I could always move this shared initialisation into a private method. The question is essentially... is initialising fields where they are defined, as opposed to in the constructor, bad in any way? The argument I am facing is one of error handling, but I do not feel it is relevant, as there are no possible exceptions that won't be picked up at compile time.


  • What are the Limitations for Connecting to an Access Query in Excel

    - by thornomad
    I have an Access 2007 database that has a number of tables, some fairly large (100,000+ records); I have created a union query to pull some of the same types of data from multiple tables into one large query for pivot table manipulation and reporting. For example:

        SELECT Language FROM Table1
        UNION ALL
        SELECT Language FROM Table2
        UNION ALL
        SELECT Language FROM Table3;

    This works. I quickly found, however, that a union query will not show up when connecting to the data source from Excel 2007. So I created a second query to reference the union query, like so:

        SELECT * FROM [The Above Union Query];

    This query works and it, initially, was accessible from Excel. Time passed, and I've added more data. Suddenly, when I connect to my Access database from Excel, my query referencing the union has disappeared. MS Access shows no signs of an issue (the data displays in Access) and my other non-union queries are showing up in Excel 2007... but not the one that references the union. What could be going on? Why did it disappear? I noticed that if I switch some of the referenced tables in the union query to a smaller table (with fewer rows), all of a sudden the query appears in Excel again. At least, I think that's what the difference is. I really can't put my finger on why some of the union queries won't show up and some will. Am stumped and need some guidance. Thanks.


  • WCF data services - Limiting related objects returned based on critera

    - by Mike Morley
    I have an object graph consisting of a base employee object, and a set of related message objects. I am able to return the employee objects based on search criteria on the employee properties (eg team) etc. However, if I expand on the messages, I get the full collection of messages back. I would like to be able to either take the top n messages (i.e. restrict to 10 most recent) or ideally use a date range on the message objects to limit how many are brought back. So far I have not been able to figure out a way of doing this: I get an error if I attempt to filter on properties on the message (&$filter=employee/message/StartDate gives an error "No property 'StartDate' exists in type 'System.Data.Objects.DataClasses.EntityCollection`1). Attempting to use Top on the message related object doesn't work either. I have also tried using a WebGet extension that takes a string list of employee IDs. That works until the list gets too long, and then fails due to the URL getting too long (it might be possible to setup a paging mechanism on this approach)... Unfortunately the UI control I am using requires the data to be in a fairly specific hierarchical shape, so I can't easily come at this from starting on the message side and working backwards. Outside of making multiple calls does anyone know of a method to accomplish this with wcf data services? Thanks! M.


  • managing classes when everything is relative to a user in nhibernate (orm)

    - by Schotime
    Firstly, I have three entities: Users, Roles, Items. A user can have multiple roles. An item gets assigned to one or more roles. Therefore a user will have access to a distinct set of items. Now there are a few ways I can see this working. 1) There is a collection on Users which has Roles via a many-to-many association. Each Role in this collection will then have its own collection of Items. So for each user I would have to get the User (using NHibernate, fetching the roles and items with it), then either do a SelectMany on the Items in each Role to get all the Items for the user, or do a couple of foreach loops to port the data to a view or DTO model. 2) Create a db trigger to automatically insert into another table that just holds the relationship between users and items, so that on my User entity I only have an Items collection which has all the items assigned to me. 3) Some other way that I can't think of yet, because I'm new to NHibernate. Now I know that the trigger doesn't feel right, but I'm not sure how else to do this. We also have some hierarchy later, where a user may be in charge of a group of users. If anyone could shed some light on how they go about these scenarios in NHibernate or another ORM that would be great, or point me in a direction. I know that in the past you would have to enter all combinations into a table so that the query worked, but when you know SQL it's not too bad. If you need any other info then let me know. Cheers

