Search Results

Search found 9106 results on 365 pages for 'course'.

Page 292/365

  • MySQL: Limit rows linked to each joined row

    - by SolidSnakeGTI
    Hello. Specifications: MySQL 4.1+. I have a situation that requires a certain result set from a MySQL query; here is the current query, followed by my questions:

        SELECT thread.dateline AS tdateline, post.dateline AS pdateline, MIN(post.dateline)
        FROM thread AS thread
        LEFT JOIN post AS post ON (thread.threadid = post.threadid)
        LEFT JOIN forum AS forum ON (thread.forumid = forum.forumid)
        WHERE post.postid != thread.firstpostid
          AND thread.open = 1 AND thread.visible = 1 AND thread.replycount >= 1
          AND post.visible = 1
          AND (forum.options & 1) AND (forum.options & 2) AND (forum.options & 4)
          AND forum.forumid IN (1,2,3)
        GROUP BY post.threadid
        ORDER BY tdateline DESC, pdateline ASC

    As you can see, I mainly need to select the dateline of threads from the 'thread' table, plus the dateline of the second post of each thread, all under the conditions you see in the WHERE clause. Since each thread has many posts and I need only one result per thread, I've used a GROUP BY clause for that purpose, so the query returns only one post's dateline with its related unique thread. My questions are: How do I limit the returned threads per forum? Suppose I need at most 5 threads to be returned for each forum declared in the WHERE clause 'forum.forumid IN(1,2,3)'; how can this be achieved? Are there any recommendations for optimizing this query (after solving the first point, of course)? Notes: I prefer not to use sub-queries, but if that is the only solution available I'll accept it. Double queries are not recommended. I'm sure there's a smart solution for this situation. Advice appreciated in advance :)

  • MTD Expression on a single column - SSRS

    - by Eric
    I need a bit of help here. I have been unable to create a 'Month To Date' expression for a single column in SSRS. I tested the following expression from a similar question in the forum, but it gives me a squiggly line below the variable 'd':

        =IIF(Fields!CreateDate.Value >= DateAdd(d,-7,Today()), Sum(Fields!Sales.Value), 0)

    If I run it, of course, I get an error telling me that 'd' is not declared. ;) I changed it to DateAdd("d",-7,Today()), Sum(Fields!Sales.Value) following the example, and the squiggly moves below the brackets of Today(); needless to say, it is still not working. I also tried a DateAdd(mm..DateDiff... variation and still nothing. My report has the following columns: Country | CustomerName | Sales | InvNotProcessed | Open Order | Orders | TotalbyCust. What I need is to show the new MTD sales only in the column named "Sales", while the other three show the rest of the query, which should be open, as some orders may take quite a while to manufacture and invoice. The last column sums the totals of all other columns. Any help will be much appreciated. Regards, Eric

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A', that has a directory NFS mounted on server 'B'. A process on A writes to two files F1 and F2 in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks the head of the files, writes data, and flushes. Process B seeks the head of the files and does reads. Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file, and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2?

    I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system.

    The reason I'm even putting the question up here is that it seems like most of the time, the setup described above does detect the changes at B in the same order they are made at A, but that occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much to expect from NFS?

  • How to deal with unknown entity references?

    - by Chris
    I'm parsing (a lot of) XML files that contain entity references which I don't know in advance (I can't change that fact). For example:

        xml = "<tag>I'm content with &funny; &entity; &references;.</tag>"

    When I try to parse this using the following code:

        final DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        final DocumentBuilder db = dbf.newDocumentBuilder();
        final InputSource is = new InputSource(new StringReader(xml));
        final Document d = db.parse(is);

    I get the following exception:

        org.xml.sax.SAXParseException: The entity "funny" was referenced, but not declared.

    But what I want to achieve is that the parser replaces every entity that is not declared (unknown to the parser) with an empty String ''. Or even better, is there a way to pass a map to the parser like:

        Map<String,String> entityMapping = ...
        entityMapping.put("funny","very");
        entityMapping.put("entity","important");
        entityMapping.put("references","stuff");

    so that I could do the following:

        final DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        final DocumentBuilder db = dbf.newDocumentBuilder();
        final InputSource is = new InputSource(new StringReader(xml));
        db.setEntityResolver(entityMapping);
        final Document d = db.parse(is);

    If I then obtained the text from the document using this example code, I should receive: I'm content with very important stuff. Any suggestions? Of course, I would already be happy to just replace the unknown entities with empty strings. Thanks,

  • Accessing Instance Variables from NSTimer selector

    - by Timbo
    Firstly newbie question: What's the difference between a selector and a method? Secondly newbie question (who would have thought): I need to loop some code based on instance variables and pause between loops until some condition (of course based on instance variables) is met. I've looked at sleep, I've looked at NSThread. In both discussions working through those options many asked why don't I use NSTimer, so here I am. Ok so it's simple enough to get a method (selector? ) to fire on a schedule. Problem I have is that I don't know how to see instance variables I've set up outside the timer from within the code NSTimer fires. I need to see those variables from the NSTimer selector code as I 1) will be updating their values and 2) will set labels based on those values. Here's some code that shows the concept… eventually I'd invalidate the timers based on myVariable too, however I've excluded that for code clarity. MyClass *aMyClassInstance = [MyClass new]; [aMyClassInstance setMyVariable:0]; [NSTimer scheduledTimerWithTimeInterval:1.0 target:self selector:@selector(doStuff) userInfo:nil repeats:YES]; [NSTimer scheduledTimerWithTimeInterval:5.0 target:self selector:@selector(doSomeOtherStuff) userInfo:nil repeats:YES]; - (void) doStuff { [aMyClassInstance setMyVariable:11]; // don't actually have access to set aMyClassInstance.myVariable [self updateSomeUILabel:[NSNumber numberWithInt:aMyClassInstance.myVariable]]; // don't actually have access to aMyClassInstance.myVariable } - (void) doSomeOtherStuff { [aMyClassInstance setMyVariable:22]; // don't actually have access to set aMyClassInstance.myVariable [self updateSomeUILabel:[NSNumber numberWithInt:aMyClassInstance.myVariable]]; // don't actually have access to aMyClassInstance.myVariable } - (void) updateSomeUILabel:(NSNumber *)arg{ int value = [arg intValue]; someUILabel.text = [NSString stringWithFormat:@"myVariable = %d", value]; // Updates the UI with new instance variable values }

  • Images not shown when publishing MVC application to virtual directory inside default web-site

    - by Michael Sagalovich
    Hi! I am developing an application using ASP.NET MVC 1 and VS2008. When I deploy it to the default web-site in my IIS6 on WinXP, all images are shown correctly, path to any given image is localhost/Content/ImagesUI/[image].[ext] When I deploy it to the virtual directory, created inside the same site, any image request returns IIS standard 404 error page, while the path is localhost/[DirectoryName]/Content/ImagesUI/[image].[ext] - that seems to be correct, true? I am mapping .* to c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll in both site and directory configurations. When this mapping is removed, images are shown correctly. However, all other URLs do not work, of course. When I am trying to open an image in browser using the URL to it, aspnet_wp.exe process is not even started (I restarted IIS to test it) - I merely get 404 or the image, depending on the presence of * mapping. Thus, I suppose it has nothing to do neither with routes registered for MVC, nor with ASP. The solution that I found is to make Content folder a virtual directory and remove * mapping from its configuration. While that's OK to some extent, I want a better solution, which will explain and eliminate the cause of the problem, not just workaround it. Thanks for your help!

  • Repository Pattern Standardization of methods

    - by Nix
    All, I am trying to find out the correct definition of the repository pattern. My original understanding was this (extremely dumbed down): separate your Business Objects from your Data Objects, and standardize access methods in the data access layer. I have really seen 2 different implementations. Implementation 1:

        public interface IRepository<T>
        {
            List<T> GetAll();
            void Create(T p);
            void Update(T p);
        }

        public interface IProductRepository : IRepository<Product>
        {
            // Extension methods if needed
            List<Product> GetProductsByCustomerID();
        }

    Implementation 2:

        public interface IProductRepository
        {
            List<Product> GetAllProducts();
            void CreateProduct(Product p);
            void UpdateProduct(Product p);
            List<Product> GetProductsByCustomerID();
        }

    Notice the first is generic (Get/Update/GetAll, etc.); the second is more of what I would define as "DAO"-like. Both give an abstraction over your data entities, which I like, but I can do the same with a simple DAO. However, the second piece, standardized access operations, is where I see value: if you implement this enterprise-wide, people would easily know the set of access methods for your repository. Am I wrong to assume that the standardization of access to data is an integral piece of this pattern? Rhino has a good article on implementation 1, and of course MS has a vague definition, and an example of implementation 2 is here.
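
    As a rough illustration of why the generic shape pays off when adopted enterprise-wide, the standardized CRUD can live in one shared base class so each concrete repository only adds its entity-specific queries; a LINQ to SQL-flavoured sketch (the DataContext usage, the Product.CustomerID property and the added customerId parameter are assumptions for the example, not from the question):

        using System.Collections.Generic;
        using System.Data.Linq;
        using System.Linq;

        public class Repository<T> : IRepository<T> where T : class
        {
            protected readonly DataContext Context;
            public Repository(DataContext context) { Context = context; }

            // The standardized operations every repository shares.
            public List<T> GetAll() { return Context.GetTable<T>().ToList(); }
            public void Create(T entity) { Context.GetTable<T>().InsertOnSubmit(entity); }
            public void Update(T entity) { Context.SubmitChanges(); } // L2S persists changes to attached entities
        }

        public class ProductRepository : Repository<Product>
        {
            public ProductRepository(DataContext context) : base(context) { }

            // Entity-specific query layered on the standard surface
            // (the customerId parameter is added purely for this example).
            public List<Product> GetProductsByCustomerID(int customerId)
            {
                return Context.GetTable<Product>()
                              .Where(p => p.CustomerID == customerId)
                              .ToList();
            }
        }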

  • Basic user authentication with records in AngularFire

    - by ajkochanowicz
    Having spent literally days trying the various recommended ways to do this, I've landed on what I think is the most simple and promising. Also thanks to the kind gents from this SO question: Get the index ID of an item in Firebase AngularFire. Current setup: users can log in with email and social networks, so when they create a record, it saves the userId as a sort of foreign key. Good so far. But I want to create a rule so twitter2934392 cannot read facebook63203497's records. Off to the security panel. Match the IDs on the backend: unfortunately, the docs are inconsistent with the method from "is firebase user id unique per provider (facebook, twitter, password)", which suggests appending the social network to the ID. The docs expect you to create a different rule for each of the login methods' ids. Why anyone using one login method would want to do that is beyond me. (From: https://www.firebase.com/docs/security/rule-expressions/auth.html) So I'll try to match the concatenated auth.provider with auth.id to the record in userId for the respective registry item. According to the API, this should be as easy as the following (in my case using $registry instead of $user, of course):

        {
          "rules": {
            ".read": true,
            ".write": true,
            "registry": {
              "$registry": {
                ".read": "$registry == auth.id"
              }
            }
          }
        }

    But that won't work, because (see the first image above) AngularFire sets each record under an index value; in the image above, it's 0. Here's where things get complicated. Also, I can't test anything in the simulator, as I cannot edit {some: 'json'} to even authenticate; the input box rejects any input. My best guess is the following:

        {
          "rules": {
            ".write": true,
            "registry": {
              "$registry": {
                ".read": "data.child('userId').val() == (auth.provider + auth.id)"
              }
            }
          }
        }

    This both throws authentication errors and simultaneously grants full read access to all users. I'm losing my mind. What am I supposed to do here?

  • Determine asymmetric latencies in a network

    - by BeeOnRope
    Imagine you have many clustered servers, across many hosts, in a heterogeneous network environment, such that the connections between servers may have wildly varying latencies and bandwidth. You want to build a map of the connections between servers by transferring data between them. Of course, this map may become stale over time as the network topology changes, but let's ignore those complexities for now and assume the network is relatively static. Given the latencies between nodes in this host graph, calculating the bandwidth is a relatively simple timing exercise. I'm having more difficulty with the latencies, however. To get round-trip time, it is a simple matter of timing a return-trip ping from the local host to a remote host; both timing events (start, stop) occur on the local host. What if I want one-way times, under the assumption that the latency is not equal in both directions? Assuming that the clocks on the various hosts are not precisely synchronized (or at least that their error is of the same magnitude as the latencies involved), how can I calculate the one-way latency? In a related question: is this asymmetric latency (where a link is quicker in one direction than the other) common in practice? For what reasons/hardware configurations? Certainly I'm aware of asymmetric bandwidth scenarios, especially on last-mile consumer links such as DSL and Cable, but I'm not so sure about latency. Added: after considering the comment below, the second portion of the question is probably better off on serverfault.
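
    A short derivation makes the clock problem concrete (the notation here is mine, not from the question). Let $L_{AB}$ and $L_{BA}$ be the true one-way latencies and $\theta$ the unknown offset of B's clock relative to A's; timestamp a probe on send at A ($t_1$), on receipt at B ($t_2$), and the reply on send at B ($t_3$) and receipt at A ($t_4$):

        t_2 - t_1 = L_{AB} + \theta
        t_4 - t_3 = L_{BA} - \theta
        (t_2 - t_1) + (t_4 - t_3) = L_{AB} + L_{BA} = RTT

    The offset cancels only in the sum, so any number of ping samples constrains the round trip but not the split between the two directions; separating $L_{AB}$ from $L_{BA}$ needs an independent clock reference (GPS- or PTP-style hardware timestamps), which is why NTP-like schemes simply assume the path is symmetric.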

  • How can I make this simple C# generics factory work?

    - by Kevin Brassen
    I have this design: public interface IFactory<T> { T Create(); T CreateWithSensibleDefaults(); } public class AppleFactory : IFactory<Apple> { ... } public class BananaFactory : IFactory<Banana> { ... } // ... The fictitious Apple and Banana here do not necessarily share any common types (other than object, of course). I don't want clients to have to depend on specific factories, so instead, you can just ask a FactoryManager for a new type. It has a FactoryForType method: IFactory<T> FactoryForType<T>(); Now you can invoke the appropriate interface methods with something like FactoryForType<Apple>().Create(). So far, so good. But there's a problem at the implementation level: how do I store this mapping from types to IFactory<T>s? The naive answer is an IDictionary<Type, IFactory<T>>, but that doesn't work since there's no type covariance on the T (I'm using C# 3.5). Am I just stuck with an IDictionary<Type, object> and doing the casting manually?
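
    As far as I know you are essentially stuck with non-generic storage in C# 3.x (no variance on IFactory<T>), but the cast can be confined to one place so callers never see it; a minimal sketch under that assumption:

        using System;
        using System.Collections.Generic;

        public class FactoryManager
        {
            // No covariance available here, so factories are stored as object and
            // the single cast lives inside FactoryForType<T>().
            private readonly IDictionary<Type, object> factories = new Dictionary<Type, object>();

            public void Register<T>(IFactory<T> factory)
            {
                factories[typeof(T)] = factory;
            }

            public IFactory<T> FactoryForType<T>()
            {
                object factory;
                if (!factories.TryGetValue(typeof(T), out factory))
                    throw new InvalidOperationException("No factory registered for " + typeof(T));
                // Safe: Register<T> only ever stores an IFactory<T> under typeof(T).
                return (IFactory<T>)factory;
            }
        }

        // Usage: manager.Register<Apple>(new AppleFactory());
        //        var apple = manager.FactoryForType<Apple>().Create();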

  • Oracle Insert via Select from multiple tables where one table may not have a row

    - by Mikezx6r
    I have a number of code value tables that contain a code and a description with a Long id. I now want to create an entry for an Account Type that references a number of codes, so I have something like this: insert into account_type_standard (account_type_Standard_id, tax_status_id, recipient_id) ( select account_type_standard_seq.nextval, ts.tax_status_id, r.recipient_id from tax_status ts, recipient r where ts.tax_status_code = ? and r.recipient_code = ?) This retrieves the appropriate values from the tax_status and recipient tables if a match is found for their respective codes. Unfortunately, recipient_code is nullable, and therefore the ? substitution value could be null. Of course, the implicit join doesn't return a row, so a row doesn't get inserted into my table. I've tried using NVL on the ? and on the r.recipient_id. I've tried to force an outer join on the r.recipient_code = ? by adding (+), but it's not an explicit join, so Oracle still didn't add another row. Anyone know of a way of doing this? I can obviously modify the statement so that I do the lookup of the recipient_id externally, and have a ? instead of r.recipient_id, and don't select from the recipient table at all, but I'd prefer to do all this in 1 SQL statement.

  • Click at specified client area

    - by VixinG
    Click doesn't work, I don't know why and can't find a solution :( i.e. Click(150,215) should move the mouse to the client area and click there.

        [DllImport("user32.dll")]
        private static extern bool ScreenToClient(IntPtr hWnd, ref Point lpPoint);

        [DllImport("user32", SetLastError = true)]
        private static extern int SetCursorPos(int x, int y);

        static void MouseMove(int x, int y)
        {
            Point p = new Point(x * -1, y * -1);
            ScreenToClient(hWnd, ref p);
            p = new Point(p.X * -1, p.Y * -1);
            SetCursorPos(p.X, p.Y);
        }

        static void Click(int x, int y)
        {
            MouseMove(x, y);
            SendMessage(hWnd, WM_LBUTTONDOWN, (IntPtr)0x1, new IntPtr(y * 0x10000 + x));
            SendMessage(hWnd, WM_LBUTTONUP, (IntPtr)0x1, new IntPtr(y * 0x10000 + x));
        }

    Edit: Of course I can use mouse_event for that, but I would like to see a solution for SendMessage()...

        [DllImport("user32.dll")]
        static extern void mouse_event(int dwFlags, int dx, int dy, int dwData, int dwExtraInfo);

        const int LEFTDOWN = 0x00000002;
        const int LEFTUP = 0x00000004;

        static void Click(int x, int y)
        {
            MouseMove(x, y);
            mouse_event((int)(LEFTDOWN), 0, 0, 0, 0);
            mouse_event((int)(LEFTUP), 0, 0, 0, 0);
        }
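
    For what it's worth, the usual suspects with SendMessage clicks are: the lParam must carry client coordinates of the exact window (often a child control, not the top-level hWnd) that should receive the click, WM_LBUTTONDOWN normally carries MK_LBUTTON in wParam while WM_LBUTTONUP carries 0, and some applications simply ignore injected messages unless they have focus. A sketch with the declarations spelled out, offered as a guess rather than a verified fix (finding the right hWnd, e.g. via ChildWindowFromPoint, is left out):

        using System;
        using System.Runtime.InteropServices;

        static class ClientClick
        {
            [DllImport("user32.dll", CharSet = CharSet.Auto)]
            static extern IntPtr SendMessage(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

            const uint WM_LBUTTONDOWN = 0x0201;
            const uint WM_LBUTTONUP   = 0x0202;
            const int  MK_LBUTTON     = 0x0001;

            // x/y must be client coordinates of the window identified by hWnd:
            // low word = x, high word = y.
            static IntPtr MakeLParam(int x, int y)
            {
                return (IntPtr)((y << 16) | (x & 0xFFFF));
            }

            public static void Click(IntPtr hWnd, int x, int y)
            {
                SendMessage(hWnd, WM_LBUTTONDOWN, (IntPtr)MK_LBUTTON, MakeLParam(x, y));
                SendMessage(hWnd, WM_LBUTTONUP, IntPtr.Zero, MakeLParam(x, y));
            }
        }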

  • LINQ to SQL - database relationships won't update after submit

    - by Quantic Programming
    I have a database with the tables Users and Uploads. The important columns are: Users -> UserID; Uploads -> UploadID, UserID. The primary key in the relationship is Users -> UserID and the foreign key is Uploads -> UserID. In LINQ to SQL, I do the following operations:

    Retrieve files

        var upload = new Upload();
        upload.UserID = user.UserID;
        upload.UploadID = XXX;
        db.Uploads.InsertOnSubmit(upload);
        db.SubmitChanges();

    If I do that and rerun the application (and the db object is re-built, of course), then if I do something like this:

        foreach(var upload in user.Uploads)

    I get all the uploads with that user's ID (like the one added in the previous example). The problem is that my application, after adding an upload and submitting changes, doesn't update the user.Uploads collection, i.e. I don't get the newly added uploads. The user object is stored in the Session object. At first, I thought that the LINQ to SQL framework doesn't update the reference of the object, and that therefore I should simply "reset" the user object with a new SQL request. I mean this:

        Session["user"] = db.Users.Where(u => u.UserID == user.UserID).SingleOrDefault();

    (where user is the previous user). But it didn't help. Please note: after rerunning the application, user.Uploads does have the new upload! Did anyone experience this type of problem, or is it normal behavior? I am a newbie to this framework. I would gladly take any advice. Thank you!
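
    A guess at what is going on: LINQ to SQL's identity map hands back the already-cached User instance when you re-query through the same DataContext, so its Uploads EntitySet is not rebuilt; reloading through a fresh context (or calling Refresh) usually shows the new rows. A minimal sketch, assuming the generated context is called MyDataContext and eager-loading the children so they survive the context's disposal:

        using System.Data.Linq;
        using System.Linq;

        User ReloadUser(int userId)
        {
            using (var db = new MyDataContext())
            {
                var options = new DataLoadOptions();
                options.LoadWith<User>(u => u.Uploads);   // populate Uploads up front
                db.LoadOptions = options;
                // Relationships not listed in LoadOptions will not lazy-load
                // after the context is disposed.
                return db.Users.Single(u => u.UserID == userId);
            }
        }

        // Session["user"] = ReloadUser(user.UserID);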

  • Accessing both stored procedure output parameters AND the result set in Entity Framework?

    - by MS.
    Is there any way of accessing both a result set and output parameters from a stored procedure added in as a function import in an Entity Framework model? I am finding that if I set the return type to "None" such that the designer generated code ends up calling base.ExecuteFunction(...) that I can access the output parameters fine after calling the function (but of course not the result set). Conversely if I set the return type in the designer to a collection of complex types then the designer generated code calls base.ExecuteFunction<T>(...) and the result set is returned as ObjectResult<T> but then the value property for the ObjectParameter instances is NULL rather than containing the proper value that I can see being passed back in Profiler. I speculate the second method is perhaps calling a DataReader and not closing it. Is this a known issue? Any work arounds or alternative approaches? Edit My code currently looks like public IEnumerable<FooBar> GetFooBars( int? param1, string param2, DateTime from, DateTime to, out DateTime? createdDate, out DateTime? deletedDate) { var createdDateParam = new ObjectParameter("CreatedDate", typeof(DateTime)); var deletedDateParam = new ObjectParameter("DeletedDate", typeof(DateTime)); var fooBars = MyContext.GetFooBars(param1, param2, from, to, createdDateParam, deletedDateParam); createdDate = (DateTime?)(createdDateParam.Value == DBNull.Value ? null : createdDateParam.Value); deletedDate = (DateTime?)(deletedDateParam.Value == DBNull.Value ? null : deletedDateParam.Value); return fooBars; }
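
    If I remember the behaviour correctly, this is less a bug than a consequence of how output parameters travel: their values arrive at the end of the result stream, so they are only populated once the ObjectResult<T> has been fully enumerated (or disposed). Materialising the results before reading the parameters usually fixes it; a sketch adapted from the method above, not verified against this exact model:

        // Inside GetFooBars: materialise the ObjectResult before touching the
        // ObjectParameters, because output values are only written once the
        // underlying reader has been consumed and closed.
        var fooBars = MyContext.GetFooBars(param1, param2, from, to,
                                           createdDateParam, deletedDateParam)
                               .ToList();   // full enumeration closes the reader

        createdDate = (DateTime?)(createdDateParam.Value == DBNull.Value
                                      ? null
                                      : createdDateParam.Value);
        deletedDate = (DateTime?)(deletedDateParam.Value == DBNull.Value
                                      ? null
                                      : deletedDateParam.Value);
        return fooBars;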

  • How to read from multiple queues in real-world?

    - by Leon Cullens
    Here's a theoretical question: when I'm building an application using message queueing, I'm going to need multiple queues supporting different data types for different purposes. Let's assume I have 20 queues (e.g. one to create new users, one to process new orders, one to edit user settings, etc.). I'm going to deploy this to Windows Azure using the 'minimum' of 1 web role and 1 worker role. How does one read from all those 20 queues in a proper way? This is what I had in mind, but I have little or no real-world practical experience with this: create a class that spawns 20 threads in the worker role 'main' class. Let each of these threads execute a method to poll a different queue, and let all those threads sleep between each poll (of course with a back-off mechanism that increases the sleep time). This leads to having 20 threads (or 21?), and 20 queues that are being actively polled, resulting in a lot of wasted messages (each time you poll an empty queue it's being billed as a message). How do you solve this problem?
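
    For what it's worth, one common shape is a single polling loop (or a small fixed pool) that round-robins across all the queues and backs off only when a complete sweep finds nothing, so the thread count stays independent of the queue count; a rough sketch against a hypothetical IPollableQueue abstraction rather than the actual storage SDK calls:

        using System;
        using System.Collections.Generic;
        using System.Threading;

        public interface IPollableQueue
        {
            // Returns true and handles one message, or false if the queue was empty.
            bool TryProcessOne();
        }

        public class QueuePoller
        {
            private readonly IList<IPollableQueue> queues;
            private readonly TimeSpan minDelay = TimeSpan.FromSeconds(1);
            private readonly TimeSpan maxDelay = TimeSpan.FromMinutes(1);

            public QueuePoller(IList<IPollableQueue> queues) { this.queues = queues; }

            public void Run(CancellationToken token)
            {
                var delay = minDelay;
                while (!token.IsCancellationRequested)
                {
                    bool anyWork = false;
                    foreach (var q in queues)          // one sweep = one poll per queue
                        anyWork |= q.TryProcessOne();

                    if (anyWork)
                        delay = minDelay;              // reset back-off while there is traffic
                    else
                    {
                        Thread.Sleep(delay);           // sleep only when the whole sweep was empty
                        delay = TimeSpan.FromTicks(Math.Min(delay.Ticks * 2, maxDelay.Ticks));
                    }
                }
            }
        }

    Each sweep still costs one polling transaction per queue, but the shared back-off bounds the idle cost, and multiplexing several logical message types onto fewer physical queues reduces it further.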

  • How do I get my Windows Forms application to use a custom main function and get access to the Applic

    - by burble
    Hi Folks I am trying to use a Main () function in a class to control the program flow in my vb .net Windows Forms application. I have added a splash screen component and a login screen, and customised my main sdi form. I have set the startup form to be my main function in the Application Page of the Project Designer, and everything seems to work fine(ish). However, I would like to use: Me.MinimumSplashScreenDisplayTime = 5000 to ensure that the splash screen is visible, but it is not recognised by the system unless I tick the Enable Application Framework check box on the Project Designer. If I do this, on startup the program ignores the login and splash screens and all my customisation and just displays a default Form1, even though I have also specified my splash screen in the AF dropdown list. Of course, there are alternative ways to delay a splash screen, such as putting the thread temporarily to sleep (which didn't seem to work), but I suspect that there are other things in the AF that I may want to use. Any suggestions on how I can get round this please, and get a sensible means of controlling program flow? Any thoughts on the best overall structure for organising program flow would also be helpful too. I am concerned both about going down a Microsoft or an alternative custom route that may cause me problems later, as the application becomes more complex. Thankses.

  • Stable/repeatable random sort (MySQL, Rails)

    - by Matt Rogish
    I'd like to paginate through a randomly sorted list of ActiveRecord models (rows from MySQL database). However, this randomization needs to persist on a per-session basis, so that other people that visit the website also receive a random, paginate-able list of records. Let's say there are enough entities (tens of thousands) that storing the randomly sorted ID values in either the session or a cookie is too large, so I must temporarily persist it in some other way (MySQL, file, etc.). Initially I thought I could create a function based on the session ID and the page ID (returning the object IDs for that page) however since the object ID values in MySQL are not sequential (there are gaps), that seemed to fall apart as I was poking at it. The nice thing is that it would require no/minimal storage but the downsides are that it is likely pretty complex to implement and probably CPU intensive. My feeling is I should create an intersection table, something like: random_sorts( sort_id, created_at, user_id NULL if guest) random_sort_items( sort_id, item_id, position ) And then simply store the 'sort_id' in the session. Then, I can paginate the random_sorts WHERE sort_id = n ORDER BY position LIMIT... as usual. Of course, I'd have to put some sort of a reaper in there to remove them after some period of inactivity (based on random_sorts.created_at). Unfortunately, I'd have to invalidate the sort as new objects were created (and/or old objects being removed, although deletion is very rare). And, as load increases the size/performance of this table (even properly indexed) drops. It seems like this ought to be a solved problem but I can't find any rails plugins that do this... Any ideas? Thanks!!

  • Adding a column to a model at runtime (without additional tables) in rails

    - by Marek
    I'm trying to give admins of my web application the ability to add some new fields to a model. The model is called Artwork and I would like to add, for instance, a test_column column at runtime. I'm just testing, so I added a simple link to do it; it will of course be parametric. I managed to do it through migrations:

        def test_migration_create
          Artwork.add_column :test_column, :integer
          flash[:notice] = "Added Column test_column to artworks"
          redirect_to :action => 'index'
        end

        def test_migration_delete
          Artwork.remove_column :test_column
          flash[:notice] = "Removed column test_column from artworks"
          redirect_to :action => 'index'
        end

    It works: the column gets added/removed to/from the database without issues. I'm using active_scaffold at the moment, so I get the test_column field in the form without adding anything. When I submit a create or an update, however, test_column does not get updated and stays empty. Inspecting the parameters, I can see:

        Parameters: {"commit"=>"Update", "authenticity_token"=>"37Bo5pT2jeoXtyY1HgkEdIhglhz8iQL0i3XAx7vu9H4=", "id"=>"62", "record"=>{"number"=>"test_artwork", "author"=>"", "title"=>"Opera di Test", "test_column"=>"TEEST", "year"=>"", "description"=>""}}

    The test_column parameter is passed correctly. So why does Active Record keep ignoring it? I tried to restart the server too, without success. I'm using ruby 1.8.7, rails 2.3.5, and mongrel with an sqlite3 database. Thanks

  • What is the "right" way to make a modular ASP.NET page?

    - by Wayne Werner
    Hi, I'm writing an internal program using ASP.NET (Visual Studio 2008). The basic premise is that folks will connect to the page through their web browser, put in some data (which will be validated and sanitized of course), click "submit" and then a query will be run on a database. I told you that story so I can tell you this story - I want to make my program modular for the likelihood that this program will be updated later on, and probably extended. My initial thought is to compile the database handler as a dll and then use its functionality on my page (see my little diagram). But I can't figure out how to hook the dll into my page - My Google-fu has failed which leads me to my questions: 1) Is creating a dll the "right" solution? If so, continue to question 2, if not, what should I be doing? Any resources/tutorial links would be appreciated. 2) How do I attach a dll? Visual Studio tells me I can't Imports from an .aspx page, and I've tried <%@ Import namespace="MyDllName" % and a few other variations, none of which worked. Thanks!
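
    For what it's worth, the usual route is not to "attach" the dll to the page at all: add a project (or assembly) reference from the web project to the class library, and the namespace becomes usable from the code-behind; the <%@ Import Namespace="..." %> directive (note the closing %>) is only needed for code in the .aspx markup itself, and only resolves once the assembly is referenced. A minimal sketch, with every name invented for illustration (ResultsGrid and InputBox stand in for designer-generated controls; C# shown, the VB equivalent uses an Imports statement at the top of the code-behind file):

        // In a separate Class Library project, compiled to e.g. DataHandler.dll
        using System.Data;

        namespace MyCompany.DataHandler
        {
            public class QueryRunner
            {
                public DataTable RunLookup(string sanitizedInput)
                {
                    var table = new DataTable();
                    // ... open a connection, run the parameterised query, fill the table
                    return table;
                }
            }
        }

        // In the web project, after adding a reference to the class library,
        // the code-behind can simply use the namespace:
        using MyCompany.DataHandler;

        public partial class SearchPage : System.Web.UI.Page
        {
            protected void SubmitButton_Click(object sender, System.EventArgs e)
            {
                var runner = new QueryRunner();
                ResultsGrid.DataSource = runner.RunLookup(InputBox.Text);
                ResultsGrid.DataBind();
            }
        }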

  • What do you call a set of Javascript closures that share a common context?

    - by Ed Stauff
    I've been trying to learn closures (in Javascript), which kind of hurts my brain after way too many years with C# and C++. I think I now have a basic understanding, but one thing bothers me: I've visited lots of websites in this Quest for Knowledge, and nowhere have I seen a word (or even a simple two-word phrase) that means "a set of Javascript closures that share a common execution context". For example: function CreateThingy (name, initialValue) { var myName = name; var myValue = initialValue; var retObj = new Object; retObj.getName = function() { return myName; } retObj.getValue = function() { return myValue; } retObj.setValue = function(newValue) { myValue = newValue; } return retObj; }; From what I've read, this seems to be one common way of implementing data hiding. The value returned by CreateThingy is, of course, an object, but what would you call the set of functions which are properties of that object? Each one is a closure, but I'd like a name I can used to describe (and think about) all of them together as one conceptual entity, and I'd rather use a commonly accepted name than make one up. Thanks! -- Ed

  • Code management in different projects with different svn repositories

    - by uzay95
    First of all I want to tell you what kind of system I have and I want to build on. 1- A Solution (has) a- Shared Class Library project (which is for lots of different solutions) b- Another Class Library project (which is only for this solution) c- Web Application project (which main part of this solution) d- Shared Web Service project (which also serves for different solutions) 2- B Solution (has) a- Shared Class Library project (which is for lots of different solutions) c- Windows Form Application project (which is main part of this solution) d- Web Service project (which also serves for different solutions) and other projects like that.... I am using xp-dev.com as our svn repository server. And I opened different projects for these items (Shared Class Library, Web Service project, Windows Form Application project, Web Application project, Another Class Library project) . I want to do the versioning of all these projects of course. My first question is, should I put each project(one solution) to one svn repository to get their revision number later on? Or should I put each of them to different svn repository and keep( write down) their correct version number that is used to publish/deploy every solution? If I use one svn for each project(Shared Class Lib, Web App, Shared Web Service....) how can I relate the right svn address and version on VS.2010 within the real solution? So, how do you manage your repositories and projects?

  • Android lifecycle: Fill in data in activity in onStart() or onResume()?

    - by pjv
    Should you get data via a cursor and fill in the data on the screen, such as setting the window title, in onStart() or onResume()? onStart() would seem the logical place because after onStart() the Activity can already be displayed, albeit in the background. Notably I was having a problem with a managed dialog that made me rethink this. If the user rotates the screen while the dialog is still open, onCreateDialog() and onPrepareDialog() are called between onStart() and onResume(). If the dialog needs to be based on the data you need to have the data before onResume(). If I'm correct about onStart() then why does the Notepad example give a bad example by doing it in onResume()? See http://developer.android.com/resources/samples/NotePad/src/com/example/android/notepad/NoteEditor.html NoteEditor.java line 176 (title = mCursor.getString...). Also, what if my Activity launches another Actvity/Dialog that changes the data my cursor is tracking. Even in the simplest case, does that mean that I have to manually update my previous screen (a listener for a dialog in the main activity), or alternatively that I have to register a ContentObserver, since I'm no longer updating the data in onResume() (though I could update it twice of course)? I know it's a basic question but the dialog only recently, to my surprise, made me realize this.

  • how to make it easy for users to register at my site?

    - by rob
    I want to make it dirt simple for users coming to my site to register so they can post comments, vote on things, etc. I would like for them to be able to use their facebook id, twitter id, yahoo mail id, gmail id, AIM id, msn id, or whatever else people are likely to have (not necessarily all of those, but the more the better). I want my mom to be able to do it in 30 seconds or less. (that is, no "enter your open id url here" type thing that would confuse her). I prefer they not have to pick a unique name, as that gets annoying as the site gets more users and it gets hard to find one that is unique. What is the best option here? I'm not quite sure about OpenId vs. OAuth, and whether there are other options. And I'd like it to be as simple for me, the developer, as possible (of course!). I don't want to spend forever learning some protocol, nor have to structure my whole app around this. It would be great if there was a site with sample code that is pretty easy to drop in. BTW, StackOverflow is a good example of a site that was easy for me to register for.

  • .NET: bool vs enum as a method parameter

    - by Julien Lebosquain
    Each time I'm writing a method that takes a boolean parameter representing an option, I find myself thinking: "should I replace this by an enum which would make reading the method calls much easier?". Consider the following with an object that takes a parameter telling whether the implementation should use its thread-safe version or not (I'm not asking here if this way of doing this is good design or not, only the use of the boolean): public void CreateSomeObject(bool makeThreadSafe); CreateSomeObject(true); When the call is next to the declaration the purpose of the parameter seems of course obvious. When it's in some third party library you barely know, it's harder to immediately see what the code does, compared to: public enum CreationOptions { None, MakeThreadSafe } public void CreateSomeObject(CreationOptions options); CreateSomeObject(CreationOptions.MakeThreadSafe); which describes the intent far better. Things get worse when there's two boolean parameters representing options. See what happened to ObjectContext.SaveChanges(bool) between Framework 3.5 and 4.0. It has been obsoleted because a second option has been introduced and the whole thing has been converted to an enum. While it seems obvious to use an enumeration when there's three elements or more, what's your opinion and experiences about using an enum instead a boolean in these specific cases?
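
    One wrinkle worth noting on the enum side: when the options are independent and may combine, a [Flags] enum keeps a single parameter readable as options accumulate, which is roughly what happened to SaveChanges; a small sketch (all types are invented for the example, and ConcurrentDictionary assumes .NET 4):

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;

        [Flags]
        public enum CreationOptions
        {
            None               = 0,
            MakeThreadSafe     = 1,
            PreallocateBuffers = 2   // hypothetical second option, purely for illustration
        }

        public static class SomeFactory
        {
            public static IDictionary<int, string> CreateSomeObject(CreationOptions options)
            {
                // The call site stays self-describing, and new options need no new overloads.
                if ((options & CreationOptions.MakeThreadSafe) != 0)
                    return new ConcurrentDictionary<int, string>();   // stand-in for the thread-safe variant
                return new Dictionary<int, string>();                 // stand-in for the plain variant
            }
        }

        // Usage:
        // var obj = SomeFactory.CreateSomeObject(CreationOptions.MakeThreadSafe | CreationOptions.PreallocateBuffers);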

  • Parallelizing L2S Entity Retrieval

    - by MarkB
    Assuming a typical domain entity approach with SQL Server and a dbml/L2S DAL with a logic layer on top of that: In situations where lazy loading is not an option, I have settled on a convention where getting a list of entities does not also get each item's child entities (no loading), but getting a single entity does (eager loading). Since getting a single entity also gets children, it causes a cascading effect in which each child then gets its children too. This sounds bad, but as long as the model is not too deep, I usually don't see performance problems that outweigh the benefits of the ease of use. So if I want to get a list in which each of the items is fully hydrated with children, I combine the GetList and GetItem methods. So I'll get a list and then loop through it getting each item with the full cascade. Even this is generally acceptable in many of the projects I've worked on - but I have recently encountered situations with larger models and/or more data in which it needs to be more efficient. I've found that partitioning the loop and executing it on multiple threads yields excellent results. In my first experiment with a list of 50 items from one particular project, I did 5 threads of 10 items each and got a 3X improvement in time. Of course, the mileage will vary depending on the project but all else being equal this is clearly a big opportunity. However, before I go further, I was wondering what others have done that have already been through this. What are some good approaches to parallelizing this type of thing?
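
    For anyone comparing notes: the main constraint is that a DataContext is not thread-safe, so each partition needs its own context. A rough sketch of the partition-and-hydrate shape using .NET 4's Parallel.ForEach (MyDataContext, Item, GetListOfItemIds and GetItemWithChildren are stand-ins for the existing types and the eager-loading single-item fetch; on 3.5 the same shape works with manually spawned threads):

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.Threading.Tasks;

        IList<int> ids = GetListOfItemIds();              // keys from the initial GetList call
        var results = new ConcurrentBag<Item>();

        Parallel.ForEach(
            Partitioner.Create(0, ids.Count, 10),         // chunks of ~10 ids per worker
            range =>
            {
                using (var db = new MyDataContext())      // one context per partition
                {
                    for (int i = range.Item1; i < range.Item2; i++)
                        results.Add(GetItemWithChildren(db, ids[i]));
                }
            });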
