Search Results

Search found 2988 results on 120 pages for 'the talking walnut'.


  • How to convince someone that reading programming-related books (blogs, etc.) is important? [closed]

    - by hgulyan
    Dear all, please help me convince my coworkers that no matter what you're doing, you need to read and try to learn something new. They say that they don't want to sit in front of a computer at the end of the day, that they have no opportunity to read during working hours, or that they're too tired to do anything extra. Have you faced this kind of situation? What did you do? What if you want to help them? What methodology would you suggest? How do you open their eyes? EDIT I'm really concerned about these people. EDIT 2 Just to be clear, I'm not talking about one person or two. Some of them just do their job well, and the company doesn't motivate them to learn anything. They're not bad people, and not bad developers; they just need something or someone to help them, to show them another view. But you can't just describe this new view, or say something like "You need to learn!" and expect that's it, they'll start learning. I started to learn OOP and DB structure 6 years ago, and I had someone who guided me. He told me to learn Java and MySQL and gave me some manuals and APIs. That's how I started. What about people who don't have that kind of mentor?

    Read the article

  • What programming hack from your past are you most ashamed of?

    - by LeopardSkinPillBoxHat
    We've all been there (usually when we are young and inexperienced). Fixing it properly is too difficult, too risky or too time-consuming, so you go down the hack path. Which hack from your past are you most ashamed of, and why? I'm talking about the ones where you would be really embarrassed if someone could attribute the hack to you (quite easy if you are using revision control software). One hack per answer please. Mine was shortly after I started my first job. I was working on a legacy C system, and there was a strange defect where a screen view failed to update properly under certain circumstances. I wasn't familiar with the debugger at the time, so I added traces into the code to figure out what was going on. Then I realised that the defect didn't occur anymore with the traces in the code. I slowly backed the traces out one by one, until I realised that only a single trace was required to make the problem go away. My logic now tells me that I was dealing with some sort of race condition or timing-related issue that the trace just swept under the rug. But I checked in the code with the following line, and all was well: printf(""); Which hacks are you ashamed of?

    Read the article

  • OO Design / Patterns - Fat Model Vs Transaction Script?

    - by ben
    Ok, 'Fat' Model and Transaction Script both solve design problems associated with where to keep business logic. I've done some research, and popular opinion says that having all business logic encapsulated within the model is the way to go (mainly since Transaction Script can become really complex and often results in code duplication). However, how does this work if I want to use the TDG of a second Model in my business logic? Surely Transaction Script presents a neater, less coupled solution than using one Model inside the business logic of another? A practical example... I have two classes: User & Alert. When pushing User instances to the database (eg, creating new user accounts), there is a business rule that requires inserting some default Alert records too (eg, a default 'welcome to the system' message etc). I see two options here: 1) Add this rule as a User method, and in the process create a dependency between User and Alert (or, at least, Alert's Table Data Gateway). 2) Use a Transaction Script, which avoids the dependency between models. (It also means the business logic is kept in a 'neutral' class and is easily accessible by Alert; that probably isn't too important here, though.) User takes responsibility for its own validation etc., but because we're talking about a business rule involving two Models, Transaction Script seems like a better choice to me. Does anyone spot flaws with this approach?
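
    To make option 2 concrete, here is a minimal sketch of a Transaction Script for this rule; all class and method names are hypothetical, and the gateways are assumed to exist:

        class CreateUserService
        {
            private $userGateway;
            private $alertGateway;

            public function __construct($userGateway, $alertGateway)
            {
                $this->userGateway = $userGateway;
                $this->alertGateway = $alertGateway;
            }

            // The business rule lives here, so neither Model depends on the other.
            public function execute(User $user)
            {
                $userId = $this->userGateway->insert($user);
                $this->alertGateway->insert($userId, 'Welcome to the system!');
                return $userId;
            }
        }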

    Read the article

  • How does browser know when to prompt user to save password?

    - by Eric
    This is related to the question I asked here: http://stackoverflow.com/questions/2382329/how-can-i-get-browser-to-prompt-to-save-password This is the problem: I CAN'T get my browser to prompt me to save the password for the site I'm developing. (I'm talking about the bar that sometimes appears when you submit a form in Firefox, saying "Remember the password for yoursite.com? Yes / Not now / Never".) This is super frustrating, because this feature of Firefox (and of most other modern browsers, which I hope work in a similar fashion) seems to be a mystery. It's like a magic trick the browser does, where it looks at your code, or what you submit, or something, and if it "looks" like a login form with a username (or email address) field and a password field, it offers to save. Except in this case it's not offering my users that option after they use my login form, and it's making me nuts. :-) (I checked my Firefox settings: I have NOT told the browser "never" for this site. It should be prompting.) My question: what exactly are the heuristics that Firefox (or any other modern browser) uses to decide when to prompt the user to save? This shouldn't be too difficult to answer, since it's right there in the Mozilla source (I don't know where to look or else I'd try to dig it out myself). You'd think there would be a blog post or some similar developer note from the Mozilla developers about this, but I can't find that either. (* Note that if your answer has anything to do with cookies, encryption or anything else about how I'm storing the user's passwords in the database, you've probably misread my question. :-)
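
    For reference, a minimal example of the kind of markup the heuristic is generally believed to look for: a form containing an input of type password, submitted with a real (non-XMLHttpRequest) POST. Field names are arbitrary:

        <form method="post" action="/login">
            <!-- a text/email field followed by a password field is the
                 classic shape browsers try to recognize -->
            <input type="text" name="username">
            <input type="password" name="password">
            <input type="submit" value="Log in">
        </form>

    If the login is done via AJAX, or the form is built and submitted entirely from JavaScript, the bar often never appears, which may be worth ruling out first.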

    Read the article

  • Data Mappers, Models and Images

    - by James
    Hi, I've seen and read plenty of blog posts and forum topics talking about and giving examples of Data Mapper / Model implementations in PHP, but I've not seen any that also deal with saving files/images. I'm currently working on a Zend Framework based project, and I'm doing some image manipulation in the model (which is being passed a file path); I'm then leaving it to the mapper to save that file to the appropriate location. Is this common practice? But then, how do you deal with creating, say, 3 different size images from the one passed in? At the moment I have a setImage($path_to_tmp_name) method which checks the image type, resizes, and then saves back to the original filename. A call to getImagePath() then returns the current file path, which the data mapper can use and then change with a call to setImagePath($path) once it has saved the file to the appropriate location, say /content/my_images. Does this sound practical to you? Also, how would you deal with getting the URL to that image? Do you see that as being something the model should provide? It seems to me that the model shouldn't worry about where the images are stored or how they're ultimately accessed through a browser, so I'm inclined to put that in the ini file and just pass the URL prefix to the view through the controller. Does that sound reasonable? I'm using GD for image manipulation - not that that's of any relevance. UPDATE: I've been wondering whether the image resizing should be done in the model at all. The model could require that it's provided a "main" image and a "thumb" image, both of certain dimensions. I've thought about creating a getImageSpecs() function in the model that would return something defining the required sizes; a separate image manipulation class could then carry out the resizing (perhaps in the controller?) and just pass the final paths into the model using something like setImagePaths($images). Any thoughts much appreciated :) James.
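
    A rough sketch of the split described in the update; every name here is hypothetical, and the resizer is assumed to be a thin wrapper around GD:

        class Image // the model: knows what versions it needs, not where files live
        {
            private $paths = array();

            public function getImageSpecs()
            {
                return array('main' => array(800, 600), 'thumb' => array(100, 75));
            }

            public function setImagePaths(array $paths) { $this->paths = $paths; }
            public function getImagePaths() { return $this->paths; }
        }

        class ImageMapper // the mapper: writes files and records final locations
        {
            public function save(Image $image, $tmpPath)
            {
                $paths = array();
                foreach ($image->getImageSpecs() as $name => $dims) {
                    $dest = '/content/my_images/' . uniqid() . '_' . $name . '.jpg';
                    GdResizer::resize($tmpPath, $dest, $dims[0], $dims[1]); // hypothetical
                    $paths[$name] = $dest;
                }
                $image->setImagePaths($paths);
            }
        }

    The URL prefix can then stay in the ini file, as suggested, with the view concatenating prefix + stored path.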

    Read the article

  • Understanding CSRF - Simple Question

    - by byronh
    I know this might make me seem like an idiot: I've read everything there is to read about CSRF, and I still don't understand how using a 'challenge token' adds any sort of prevention. Please help me clarify the basic concept; none of the articles and posts here on SO that I've read seem to explicitly state what value you're comparing with what. From OWASP: In general, developers need only generate this token once for the current session. After initial generation of this token, the value is stored in the session and is utilized for each subsequent request until the session expires. If I understand the process correctly, this is what happens. I log in at http://example.com and a session/cookie is created containing this random token. Then, every form includes a hidden input also containing this random value from the session, which is compared with the session/cookie upon form submission. But what does that accomplish? Aren't you just taking session data, putting it in the page, and then comparing it with the exact same session data? Seems like circular reasoning. These articles keep talking about following the "same-origin policy", but that makes no sense to me, because CSRF attacks come from the user's own browser, and so appear to be of the same origin; they just trick the user into performing actions he/she didn't intend. Is there any alternative other than appending the token to every single URL as a query string? That seems very ugly and impractical, and makes bookmarking harder for the user.
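
    For what it's worth, a minimal sketch of the flow the OWASP text describes, assuming PHP sessions (any CSPRNG will do for the token):

        session_start();

        // generate once per session
        if (empty($_SESSION['csrf_token'])) {
            $_SESSION['csrf_token'] = bin2hex(openssl_random_pseudo_bytes(16));
        }

        // embed in every form the site itself renders
        echo '<input type="hidden" name="csrf_token" value="'
           . $_SESSION['csrf_token'] . '">';

        // on submission: a forged cross-site request sends the cookie
        // automatically, but the attacker's page cannot read the session
        // or the rendered form, so it cannot supply this field
        if (!isset($_POST['csrf_token'])
                || $_POST['csrf_token'] !== $_SESSION['csrf_token']) {
            die('Invalid CSRF token');
        }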

    Read the article

  • "Work stealing" vs. "Work shrugging"?

    - by John
    Why is it that I can find lots of information on "work stealing" and nothing on "work shrugging" as a dynamic load-balancing strategy? By "work shrugging" I mean busy processors pushing excess work towards less loaded neighbours, rather than idle processors pulling work from busy neighbours ("work stealing"). I think the general scalability should be the same for both strategies. However, I believe it is much more efficient for busy processors to wake idle processors if and when there is definitely work for them to do, than to have idle processors spinning or waking periodically to speculatively poll all neighbours for possible work. Anyway, a quick google didn't turn up anything under the heading of "work shrugging" or similar, so any pointers to prior art and the jargon for this strategy would be welcome. Clarification/Confession In more detail: by "work shrugging" I actually envisage the work-submitting processor (which may or may not be the target processor) being responsible for looking around the immediate locality of the preferred target processor (based on data/code locality) to decide whether a near neighbour should be given the new work instead, because they don't have as much work to do. I am talking about an atomic read of the immediate (typically 2 to 4) neighbours' estimated queue length here. I do not think this implies any more coupling than thieves polling and stealing from their neighbours - it just happens much less often, or rather, only when it makes economic sense to do so. (I am assuming "lock-free, almost wait-free" queue structures in both strategies.) Thanks.
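
    A sketch of the push-side decision being described; this is illustrative only, with the queue-length reads cheap and possibly stale, as the atomic-read wording above implies:

        import collections

        class Processor:
            def __init__(self, name):
                self.name = name
                self.queue = collections.deque()
                self.estimated_queue_length = 0  # read without locks; may be stale

            def push(self, task):
                self.queue.append(task)
                self.estimated_queue_length = len(self.queue)

        def submit(task, target, neighbours, threshold=2):
            """Shrug new work toward the least-loaded processor near the target."""
            candidates = [target] + list(neighbours)[:4]
            best = min(candidates, key=lambda p: p.estimated_queue_length)
            # only shrug the work away when it makes economic sense
            if target.estimated_queue_length - best.estimated_queue_length >= threshold:
                best.push(task)  # and wake it here if it was idle
            else:
                target.push(task)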

    Read the article

  • Test Views in ASP.NET MVC2 (ala RSpec)

    - by Dmitriy Nagirnyak
    Hi, I really miss the ability to test views independently of controllers, the way RSpec does it. What I want to do is to perform assertions on the rendered view, with no controller involved. In order to do so I should provide the required Model, ViewData and maybe some details from HttpContextBase (when will we get rid of HttpContext!). So far I have not found anything that allows doing this. It might also depend heavily on the view engine being used. Things that views might contain include: partial views (maybe nested deeply), master pages (or the equivalent in other view engines), Html helpers generating links and other elements - generally almost anything within the range of common sense :). Also please note that I am not talking about client-side testing, so Selenium is not related to this at all; it is just plain .NET testing. So are there any options for actually testing views? Thanks, Dmitriy.

    Read the article

  • Capturing time intervals when somebody was online? How would you implement this feature?

    - by Kirzilla
    Hello, our aim is to build timelines showing the periods of time when a user was online. (It really doesn't matter which user we are talking about or where he was online.) To get information about who is online, we can call an API method: someservice.com/api/?call=whoIsOnline. The whoIsOnline method gives us a list of the users currently online, but there is no API method to get information about who IS NOT online. So we have to build our timelines using only the information we get from whoIsOnline. Of course there will be a measurement error (we can't track the information in real time). Let's suppose we call whoIsOnline every 2 minutes (yes, we will run our script by cron every 2 minutes). For example, calling whoIsOnline at 08:00 returns Peter_id, Michael_id, Andy_id; calling whoIsOnline at 08:02 returns Michael_id, Andy_id, George_id. As you can see, Peter has gone offline, but we have a new onliner, George. The available instruments are a DB (MySQL), text files, and key-value storage (Redis/memcache); feel free to choose any of them (or even all of them). So, we need to end up with information like this: George_id was online... 12 May: 08:02-08:30, 12:40-12:46, 20:14-22:36; 11 May: 09:10-12:30, 21:45-23:00; 10 May: was not online. And now the question... How would you store the information to implement such timelines? How would you query/calculate the periods of time when a user was online? Additional information: You cannot update information about offline users, only about users who are "currently" online. The solution should be flexible: timeline information could be presented relative to any timezone. We need to keep information only for the last 7 days. Every user seen online automatically gets his own identifier in our database. Phew... it was really hard for me to write this because my English is pretty bad, but I hope my question is clear to you. Thank you.
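
    One way to fold the 2-minute snapshots into intervals, sketched in Python with hypothetical structures; the idea is to extend the open interval if the user was seen within one polling period, otherwise start a new one:

        POLL_PERIOD = 120  # seconds between whoIsOnline calls

        intervals = {}  # user_id -> list of [start_ts, end_ts] pairs, stored in UTC

        def record_snapshot(online_ids, now):
            """Fold one whoIsOnline snapshot into the stored timelines."""
            for uid in online_ids:
                spans = intervals.setdefault(uid, [])
                # seen again within one polling period: extend the open interval
                if spans and now - spans[-1][1] <= POLL_PERIOD:
                    spans[-1][1] = now
                else:
                    spans.append([now, now])

    Storing raw UTC timestamps keeps the timezone requirement a pure rendering concern, and the 7-day rule becomes a periodic prune of spans older than a week (or a Redis key TTL).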

    Read the article

  • How can I use "Dependency Injection" in simple php functions, and should I bother?

    - by Tchalvak
    I hear people talking about dependency injection and its benefits all the time, but I don't really understand it. I'm wondering if it's a solution to the "I pass database connections as arguments all the time" problem. I tried reading Wikipedia's entry on it, but the example is written in Java so I don't solidly understand the difference it is trying to make clear ( http://en.wikipedia.org/wiki/Dependency_injection ). I read this dependency-injection-in-php article ( http://www.potstuck.com/2009/01/08/php-dependency-injection/ ), and it seems like the objective is not to pass dependencies to an object directly, but to cordon off the creation of an object along with the creation of its dependencies. I'm not sure how to apply that in a context using plain PHP functions, though. Additionally, is the following dependency injection, and should I bother trying to do dependency injection in a functional context?

    Version 1 (the kind of code that I create, but don't like, every day):

        function get_data_from_database($database_connection) {
            $data = $database_connection->query('blah');
            return $data;
        }

    Version 2 (don't have to pass a database connection, but perhaps not dependency injection?):

        function get_database_connection() {
            static $db_connection;
            if ($db_connection) {
                return $db_connection;
            } else {
                // create db_connection ...
            }
        }

        function get_data_from_database() {
            $conn = get_database_connection();
            $data = $conn->query('blah');
            return $data;
        }

        $data = get_data_from_database();

    Version 3 (the creation of the "object"/data is separate, and the database code is still reused, so perhaps this counts as dependency injection?):

        function factory_of_data_set() {
            static $db_connection;
            $data_set = null;
            $db_connection = get_database_connection();
            $data_set = $db_connection->query('blah');
            return $data_set;
        }

        $data = factory_of_data_set();

    Does anyone have a good resource or just insight that makes the method and benefit -crystal- clear?
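
    For comparison, a small sketch of what dependency injection usually means in an object context: the dependency is handed in from outside, and the class never creates or looks up its own connection. Arguably, Version 1 above is already this, at function granularity:

        class DataFetcher
        {
            private $db;

            // the connection is injected; this class never creates it
            public function __construct($database_connection)
            {
                $this->db = $database_connection;
            }

            public function getData()
            {
                return $this->db->query('blah');
            }
        }

        $fetcher = new DataFetcher(get_database_connection());
        $data = $fetcher->getData();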

    Read the article

  • Copying a directory that is version controlled

    - by ibz
    I am curious whether it is OK to copy a directory that is under version control and start working on both copies. I know the answer can differ from one VCS to another, but I intentionally don't specify any VCS, since I am curious about the different cases. I was talking to a coworker recently about doing this in SVN. I think it should be OK, but I am still not 100% sure, since I don't know exactly what SVN stores in the working copy. However, if we talk about the DVCS world, things might be even less clear, since every working copy is a repository by itself. Being faced with doing this in bzr now, I decided to ask the question. Later edit: Some people asked why I would want to do that. Here is the whole story: In the case of SVN it was because, being out of the office, the connection to the SVN server was really slow, so my coworker and I decided to check out the sources only once and make a local copy. That's what we did and it worked OK, but I am still wondering whether it is guaranteed to work, or whether it just happened to. In the bzr case, I am planning to move the "main" repo to another server. So I was thinking to just copy it there and start considering that the main repo. I guess the safest option is to make a clone, though.

    Read the article

  • Why is unsigned int 0xFFFFFFFF equal to int -1?

    - by conejoroy
    Perhaps it's a very stupid question, but I'm having a hard time figuring this out =) In C or C++ it is said that the maximum number a size_t (an unsigned int data type) can hold is the same as casting -1 to that data type. For example, see http://stackoverflow.com/questions/1420982/invalid-value-for-sizet Why?? I'm confused... I mean (talking about 32-bit ints), AFAIK the most significant bit holds the sign in a signed data type (that is, bit 0x80000000 to form a negative number). Then, 1 is 0x00000001, and 0x7FFFFFFF is the greatest positive number an int data type can hold. Then, AFAIK, the binary representation of int -1 should be 0x80000001 (perhaps I'm wrong). Why/how is this binary value converted to something completely different (0xFFFFFFFF) when casting ints to unsigned? Or... how is it possible to form a binary -1 out of 0xFFFFFFFF? I have no doubt that in C ((unsigned int)-1) == 0xFFFFFFFF and ((int)0xFFFFFFFF) == -1 are just as true as 1 + 1 == 2; I'm just wondering why. Thanks!
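
    A quick experiment with the values in question. Note that converting an out-of-range unsigned value to int is implementation-defined in C, but on two's-complement machines it behaves as below:

        #include <stdio.h>

        int main(void)
        {
            unsigned int u = (unsigned int)-1; /* wraps to UINT_MAX by definition */
            int i = (int)0xFFFFFFFFu;          /* -1 on two's-complement machines */

            printf("%08X\n", u); /* FFFFFFFF with 32-bit int */
            printf("%d\n", i);   /* -1 */
            return 0;
        }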

    Read the article

  • Why is cell phone software still so primitive?

    - by Tomislav Nakic-Alfirevic
    I don't do mobile development, but it strikes me as odd that features like these aren't available by default on most phones: full-text search (searching all address book contents and messages, anything else being a plus); better call management, e.g. a rotating audio call log, meaning you always have the last N calls recorded for your listening pleasure later (your little girl just said her first "da-da" while you were on a business trip, you had a telephone job interview, you received complex instructions to do something, etc.); bluetooth remote control (like e.g. anyRemote, but available by default on a bluetooth phone); multitasking capabilities worth mentioning; and in general regular (e.g. weekly) software updates, which would make the phone much more usable (even if they had to be done over USB rather than over the network). I'm sure I was dumbfounded by the lack or design of other features as well, but they don't come to mind right now. To clarify, I'm not talking about smartphones here: my plain, 2-year-old phone has a CPU an order of magnitude faster than my first PC and about as much storage space, and it's ridiculous how bad (slow, unwieldy) the software is; and it's not just one phone or one manufacturer. What keeps the (to me) obvious software functionality vacuum on a capable hardware platform from being filled?

    Read the article

  • iPhone OS: Strategies for high density image work

    - by Jasconius
    I have a project coming around the bend this summer that is going to involve, potentially, an extremely high volume of image data for display. We are talking hundreds of 640x480-ish images in a given application session (scaled to a smaller resolution when displayed), and handfuls of very large (1280x1024 or higher) images at a time. I've already done some preliminary work, and I've found that the typical 640x480-ish image is just a shade under 1 MB in memory when placed into a UIImageView and displayed... but the very large images can be a whopping 5+ MB in some cases. This project is actually targeted at the iPad, which, in my Instruments tests, seems to cap out at about 80-100 MB of addressable physical memory. Details aside, I need to start thinking about how to move huge volumes of image data between virtual and physical memory while preserving the fluidity and responsiveness of the application, which will be high visibility. I'm probably at the higher end of intermediate in Objective-C... so I am looking for some solid articles and advice on the following: 1) Responsible management of UIImage and UIImageView in the name of conserving physical RAM 2) Merits of using CGImage over UIImage, particularly for the huge images, and whether there will be any performance gain 3) Anything dealing with memory paging, particularly as it pertains to images I will epilogue by saying that the numbers above may be off by about 10 or 15%. Images may or may not end up being bundled into the actual app itself, as opposed to being loaded in from an external server.
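
    On point 1, one commonly cited precaution is to avoid +imageNamed: for large one-off images, since it keeps a system-level cache; a sketch (the path variable is hypothetical):

        // loads without the imageNamed: cache, so the decoded bitmap can be
        // reclaimed as soon as nothing retains the image
        UIImage *big = [UIImage imageWithContentsOfFile:path];
        imageView.image = big;

        // when the view scrolls off-screen, drop the reference so the
        // several MB of decoded bitmap can actually be freed
        imageView.image = nil;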

    Read the article

  • Are short tags *that* bad?

    - by Col. Shrapnel
    Everyone here on SO says short tags are bad, for 3 main reasons: XML is used everywhere; you can't control where your script is going to be run; and short tags are going to be removed in PHP6. But let's look closer at them. The last one is easy - it's just not true. XML is not really a problem either: if you want to use short tags, it won't be a problem for you to write the single XML declaration using a PHP echo, <?="<?XML...?>"?>. Anyway, why not leave such a trifling thing to one's own judgment? That leaves access to the PHP configuration. Oh yes, this is the only thing that can really be considered a reason - but only in case you plan to spread your code widely. And if you don't? I think most scripts are written not for wide distribution, but for just one place. If you can use short tags in that place, why abandon them? Anyway, it's only templates we are talking about. If you don't like short tags, PHP's native templating is probably not for you; why not try Smarty then? Well, the question is: are there any other real reasons to abandon short tags and make avoiding them a strict recommendation? Or, as it was said, is it better to leave short tags alone?

    Read the article

  • Good mobile oriented GWT widget library alternatives

    - by Michael Donohue
    I've been developing a travel planning site - tripgrep.com - which is built on appengine, GWT and smartgwt, among other technologies. It is still early days, and the site is now working well in my development environment, which is either a Windows or Mac computer. However, I am frequently talking up the website to my friends when we are at a bar or other venue, so I am standing there while they try to access the site via an iPhone, Android or Blackberry - I've witnessed all three. It has been painfully obvious that the browser-based frontend takes a long time to download on a mobile device. I am pretty sure this is because of the javascript download for SmartGWT. So, I would like to look at alternatives to SmartGWT. What I like about SmartGWT is that it has a reasonable look and feel out of the box - I don't need to learn any design or CSS - and it has an office-application look. This is considerably better than the GWT built-in widgets, which just get a blue border. The better look and feel is why I went with SmartGWT early on. However, the slow load times are killing me in these mobile demos. So now I want a fast-loading widget alternative that has a good look and feel out of the box. The features I care about are: tabs, good form layout, Google Maps API integration, grid data viewing. If those are all available in a library that loads quickly on a mobile device, then that's the library I want.

    Read the article

  • How can I get bitfields to arrange my bits in the right order?

    - by Jim Hunziker
    To begin with, the application in question is always going to be on the same processor, and the compiler is always gcc, so I'm not concerned about bitfields not being portable. gcc lays out bitfields such that the first listed field corresponds to the least significant bit of a byte. So with the following structure, setting a=0, b=1, c=1, d=1 gives a byte of value e0:

        struct Bits {
            unsigned int a:5;
            unsigned int b:1;
            unsigned int c:1;
            unsigned int d:1;
        } __attribute__((__packed__));

    (Actually, this is C++, so I'm talking about g++.) Now let's say I'd like a to be a six-bit integer. Now, I can see why this won't work, but I coded the following structure:

        struct Bits2 {
            unsigned int a:6;
            unsigned int b:1;
            unsigned int c:1;
            unsigned int d:1;
        } __attribute__((__packed__));

    Setting b, c, and d to 1, and a to 0, results in the following two bytes: c0 01. This isn't what I wanted. I was hoping to see this: e0 00. Is there any way to specify a structure that has three bits in the most significant bits of the first byte and six bits spanning the five least significant bits of the first byte and the most significant bit of the second? Please be aware that I have no control over where these bits are supposed to be laid out: it's a layout of bits that is defined by someone else's interface.
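
    Since the bit order within a storage unit is fixed by the ABI rather than by any gcc switch, the usual fallback for an externally defined layout is to pack the bytes by hand. A sketch matching the layout described above (d, c, b in the top three bits of byte 0, a spanning the remaining bits):

        #include <stdint.h>

        static void pack_bits(uint8_t out[2], unsigned a, unsigned b,
                              unsigned c, unsigned d)
        {
            /* byte 0: d c b a5 a4 a3 a2 a1    byte 1: a0 0 0 0 0 0 0 0 */
            out[0] = (uint8_t)((d << 7) | (c << 6) | (b << 5) | ((a >> 1) & 0x1F));
            out[1] = (uint8_t)((a & 0x1u) << 7);
        }

        /* a=0, b=c=d=1 gives out[0]=0xE0, out[1]=0x00 - the "e0 00" wanted above */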

    Read the article

  • Hotkeys in webapps

    - by Johoo
    When creating webapps, are there any guidelines on which keys you can use for your own hotkeys without overriding too many of the browser's default hotkeys? For example, I might want a custom copy command for copying entire sets of data that only make sense for my program, instead of just text. The logical combination for this would be Ctrl+C, but that would destroy the default copy hotkey for normal text. One solution I was thinking about is to only catch the hotkey when it "makes sense", but when you use some advanced custom selection it might be hard to differentiate whether your data is focused, text is selected, or both. Right now I am only using single keys as hotkeys, so just 'c' for the example above, and this seems to be what most other sites are doing too. The problem is that if you have text input, this doesn't work so well. Is this the best solution? To clarify, I'm talking about advanced webapps that behave more like normal programs and not just websites presenting information (even though I think these guidelines would be valid for both cases). So for the copy example it might not be a big deal if you can't copy the text in the menu, but if Ctrl+Tab, Alt+D or Ctrl+E didn't work I would be really pissed, cough flash cough.
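
    A sketch of the "only catch the hotkey when it makes sense" idea: ignore single-key hotkeys whenever a text field or editable region has focus (the copy action is a hypothetical app function):

        document.addEventListener('keydown', function (e) {
            var tag = (e.target.tagName || '').toLowerCase();
            // let plain typing reach inputs and editable regions untouched
            if (tag === 'input' || tag === 'textarea' || e.target.isContentEditable) {
                return;
            }
            // single-key hotkey; Ctrl+C etc. stay with the browser
            if (e.key === 'c' && !e.ctrlKey && !e.metaKey) {
                e.preventDefault();
                copySelectedDataSet(); // hypothetical app-specific action
            }
        });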

    Read the article

  • NHibernate IQueryable Collection as Property of Root

    - by Khalid Abuhakmeh
    Hello, and thank you for taking the time to read this. I have a root object with a property that is a collection. For example, I have a Shelf object that has Books:

        // now
        public class Shelf {
            public ICollection<Book> Books { get; set; }
        }

        // want
        public class Shelf {
            public IQueryable<Book> Books { get; set; }
        }

    What I want to accomplish is to return a collection that is IQueryable, so that I can run paging and filtering off the collection directly from the parent:

        var shelf = shelfRepository.Get(1);
        var filtered = from book in shelf.Books
                       where book.Name == "The Great Gatsby"
                       select book;

    I want that query to be executed by NHibernate specifically, and not a get-all query that loads the whole collection and then parses it in memory (which is what currently happens when I use ICollection). The reasoning behind this is that my collection could be huge, tens of thousands of records, and a get-all query could bash my database. I would like to do this implicitly, so that when NHibernate sees an IQueryable on my class it knows what to do. I have looked at NHibernate's LINQ provider, and currently I am making the decision to take large collections and split them into their own repository so that I can make explicit calls for filtering and paging. LINQ to SQL offers something similar to what I'm talking about.
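
    One route available here is to stop mapping the collection on the root and expose a query from a repository instead; with NHibernate 3's LINQ provider (session.Query<T>()), the Where/Skip/Take are translated to SQL rather than run in memory. A sketch with hypothetical mappings:

        using System.Linq;
        using NHibernate;
        using NHibernate.Linq;

        public class BookRepository
        {
            private readonly ISession session;

            public BookRepository(ISession session)
            {
                this.session = session;
            }

            // deferred: nothing hits the database until the query is enumerated
            public IQueryable<Book> ForShelf(int shelfId)
            {
                return session.Query<Book>().Where(b => b.Shelf.Id == shelfId);
            }
        }

        // paging and filtering then happen in SQL, not on tens of thousands of rows:
        // repository.ForShelf(1)
        //           .Where(b => b.Name == "The Great Gatsby")
        //           .Skip(0).Take(20)
        //           .ToList();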

    Read the article

  • Calling a method on an uninitialized object (null pointer)

    - by Florin
    What is the normal behavior in Objective-C if you call a method on an object (pointer) that is nil (maybe because someone forgot to initialize it)? Shouldn't it generate some kind of error (segmentation fault, null pointer exception...)? If this is normal behavior, is there a way of changing this behavior (by configuring the compiler) so that the program raises some kind of error / exception at runtime? To make it more clear what I am talking about, here's an example. Having this class:

        @interface Person : NSObject {
            NSString *name;
        }
        @property (nonatomic, retain) NSString *name;
        - (void)sayHi;
        @end

    with this implementation:

        @implementation Person
        @synthesize name;

        - (void)dealloc {
            [name release];
            [super dealloc];
        }

        - (void)sayHi {
            NSLog(@"Hello");
            NSLog(@"My name is %@.", name);
        }
        @end

    Somewhere in the program I do this:

        Person *person = nil;
        //person = [[Person alloc] init]; // let's say I comment this line
        person.name = @"Mike"; // shouldn't I get an error here?
        [person sayHi];        // and here
        [person release];      // and here
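
    For reference, messaging nil is defined behavior in Objective-C (the call is a no-op that returns nil/zero), so nothing here will crash. One common way to surface an uninitialized object during development is an assertion; NSAssert is compiled out of release builds via NS_BLOCK_ASSERTIONS (use NSCAssert outside of methods):

        NSAssert(person != nil, @"person was never initialized");
        [person sayHi]; // now fails loudly above instead of silently doing nothing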

    Read the article

  • Ignore LD_LIBRARY_PATH and stick with library given through -rpath at link time

    - by roe
    I'm sitting in an environment which I have no real control over (it's not just me, so basically, I can't change the environment or it won't work for anyone else); the only thing I can affect is how the binary is built. My problem is, the environment specifies an LD_LIBRARY_PATH containing a libstdc++ which is not compatible with the compiler being used. I tried compiling statically, but that doesn't seem possible for g++ (version 4.2.3; there seems to have been work done in this direction in later versions, -static-libstdc++ or something like that, but those are not available to me). Now I've arrived at using rpath to bake the absolute path name into the executable (which would work; all the machines it's supposed to run on are identical). Unfortunately it appears that LD_LIBRARY_PATH takes precedence over rpath (resetting LD_LIBRARY_PATH confirmed that the binary is able to find the library, but as stated above, LD_LIBRARY_PATH will be set for everyone, and I cannot change that). Is there any way I can make rpath take precedence over LD_LIBRARY_PATH, or are there any other possible solutions to my problem? Note that I'm talking about dynamic linking at runtime; I am able to control the command line at compile and link time. Thanks.
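
    On ELF systems the detail that usually matters here is DT_RPATH vs. DT_RUNPATH: the dynamic linker consults the older DT_RPATH before LD_LIBRARY_PATH, but DT_RUNPATH after it. If the toolchain emits DT_RUNPATH by default, forcing the old tag should make the baked-in path win; a sketch (paths are placeholders):

        # emit old-style DT_RPATH, which is searched before LD_LIBRARY_PATH
        g++ main.o -o myapp \
            -Wl,-rpath,/opt/gcc-4.2.3/lib \
            -Wl,--disable-new-dtags

    readelf -d myapp should then show an RPATH entry rather than a RUNPATH one.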

    Read the article

  • What does flushing thread local memory to global memory mean?

    - by Jack Griffith
    Hi, I am aware that the purpose of volatile variables in Java is that writes to such variables are immediately visible to other threads. I am also aware that one of the effects of a synchronized block is to flush thread-local memory to global memory. I have never fully understood the references to 'thread-local' memory in this context. I understand that data which only exists on the stack is thread-local, but when talking about objects on the heap my understanding becomes hazy. I was hoping to get comments on the following points: When executing on a machine with multiple processors, does flushing thread-local memory simply refer to flushing the CPU cache into RAM? When executing on a uniprocessor machine, does this mean anything at all? If it is possible for the heap to have the same variable at two different memory locations (each accessed by a different thread), under what circumstances would this arise? What implications does this have for garbage collection? How aggressively do VMs do this kind of thing? Overall, I think I am trying to understand whether thread-local means memory that is physically accessible by only one CPU, or whether there is logical thread-local heap partitioning done by the VM. Any links to presentations or documentation would be immensely helpful. I have spent time researching this, and although I have found lots of nice literature, I haven't been able to satisfy my curiosity regarding the different situations & definitions of thread-local memory. Thanks very much.
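
    As a concrete anchor for the discussion, the canonical visibility example in Java: without volatile, the JIT is free to keep a copy of stop in a register or CPU cache, so the loop might never observe the other thread's write:

        class Worker implements Runnable {
            // volatile: a write happens-before every subsequent read of the
            // same field by any other thread (the JMM's version of "flushed")
            private volatile boolean stop = false;

            public void shutdown() { stop = true; }

            @Override
            public void run() {
                while (!stop) {
                    // do work; each pass re-reads stop from shared memory
                }
            }
        }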

    Read the article

  • Automatic .NET code, nhibernate session, and LINQ datacontext clean-up?

    - by AverageJoe719
    Hi all, in my goal to adopt better coding practices I have a few questions in general about automatic handling of code. I have heard different answers, both online and from talking with other developers/programmers at work. I am not sure if I should have split these into 3 questions, but they all seem related: 1) How does .NET handle instances of classes and other code things that take up memory? I recently found out about using the factory pattern for certain things like service classes, so that they are only instantiated once in the entire application, but when I mentioned it I was told that ".NET handles a lot of that stuff automatically." 2) How does NHibernate's session handle automatic clean-up of unused things? I've seen some say that it is great at handling things automatically and that you should just use a session factory and that's it, no need to close it. But I have also read, and seen in many examples, that people close the NHibernate session. 3) How does LINQ's DataContext handle this? Most of the time I never disposed my DataContexts and the app didn't seem to take a performance hit (though I am not running anything super intensive), but it seems like most people recommend disposing of your DataContext after you are done with it. However, I have seen many, many code examples where the Dispose method is never called. Also, in general, I found it kind of annoying that you couldn't access even one-deep child related objects after disposing of the DataContext unless you explicitly also grabbed them in the query. Thanks all. I am loving this site so far; I kind of get lost and spend hours just reading things on here. =)
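
    On point 3, the usual pattern for a LINQ to SQL DataContext is a using block, with the caveat that deferred queries must be materialized before disposal (the context and table names here are hypothetical):

        using (var db = new ShopDataContext())
        {
            // ToList() forces execution now; returning the bare IQueryable
            // and enumerating it after the block would hit a disposed context
            var orders = db.Orders
                           .Where(o => o.CustomerId == customerId)
                           .ToList();
        } // Dispose() runs here even if an exception was thrown

    This is also why child objects are unreachable after disposal unless they were loaded eagerly as part of the original query.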

    Read the article

  • Multi-level shop, xml or sql. best practice?

    - by danrichardson
    Hello, I have a general "best practice" question regarding building a multi-level shop, which I hope doesn't get marked down/deleted, as I personally think it's quite a good "subjective" question. I am a developer in charge (in most part) of maintaining and evolving a CMS system and its associated front-end functionality. Over the past half year I have developed a multi-level shop system, so that an infinite depth of categories may exist down to the product level, and it all works fine. However, over the last week or so I have questioned my own methods in front-end development and the best way to show the multi-level data structure. I currently use a SQL Server database (2000), pull out all the shop levels, and then process them into an enumerable typed list with child enumerable typed lists, so that all levels are sorted. This feels quite process-heavy, but we're not talking about thousands of rows; generally only 1-500 rows, maybe. I have been toying with the idea recently of storing the structure in an XML document (as well as in the database) and then sending last-modified headers when serving/requesting the document, which would then be processed as/when necessary with an XSL(T) document, processed server side. This is quite a handy, reusable method of storing the data, but does it have more overhead given that I'm opening and closing files? The XML would also require a bit of processing to pull out blocks of XML, if for instance I wanted to show two levels midway through the tree for a side menu. I use the above method for sitemap purposes, so there is already code I have built which does what I require, but I'm unsure what the best process is. Maybe a hybrid method which pulls out the data, sorts it, and then makes an XML document/stream (XDocument/XmlDocument) for XSL processing is a good way? This is the way I currently make the CMS work for the shop. So really (and thanks for sticking with me on this), I am just wondering which methods other people use or recommend as being the best/most logical way of doing things. Thanks, Dan
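
    For the database side, the sort-into-nested-lists step is cheap at this scale. A sketch of building the nested typed lists from flat adjacency-list rows (the Category class and id/parent-id columns are assumptions):

        using System.Collections.Generic;

        public class Category
        {
            public int Id;
            public int? ParentId;
            public string Name;
            public List<Category> Children = new List<Category>();
        }

        static class ShopTree
        {
            // O(n^2), which is fine for a few hundred rows
            public static List<Category> Build(List<Category> all, int? parentId)
            {
                var branch = new List<Category>();
                foreach (var c in all)
                {
                    if (c.ParentId == parentId)
                    {
                        c.Children = Build(all, c.Id);
                        branch.Add(c);
                    }
                }
                return branch; // ready to serialize (e.g. via XDocument) for XSLT
            }
        }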

    Read the article

  • A way to specify a different host in an SSH tunnel from the host in use

    - by Tom
    I am trying to set up an SSH tunnel to access Beanstalk (to bypass an annoying proxy server). I can get this to work, but with one caveat: I have to map my Beanstalk host URL (username.svn.beanstalkapp.com) to 127.0.0.1 in my hosts file (and use the IP in place of the domain when setting up the tunnel). The reason (I think) is that I am creating the tunnel using the local SSH instance (on Snow Leopard), and if I use localhost or 127.0.0.1 when talking to Beanstalk, it rejects the authorisation credentials. I believe this is because Beanstalk uses the hostname specified in a request to determine which account the username / password combination should be checked against. If localhost is used, this information is missing (in the form Beanstalk requires) from the requests. At the moment I dig the IP for username.svn.beanstalkapp.com, map username.svn.beanstalkapp.com to 127.0.0.1 in my hosts file, and then create the tunnel with:

        ssh -L 8080:ip:443 -p 22 -l tom -N 127.0.0.1

    I can then tell Subversion that the repo is located at https://username.svn.beanstalkapp.com:8080/repo-name. This uses my tunnel, and the username and password are accepted. So, my question is whether there is an option when setting up the SSH tunnel that would mean I don't have to use my hosts-file workaround?

    Read the article
