Search Results

Search found 11618 results on 465 pages for 'shared storage'.


  • Linux 2.6.31 Scheduler and Multithreaded Jobs

    - by dsimcha
    I run massively parallel scientific computing jobs on a shared Linux computer with 24 cores. Most of the time my jobs are capable of scaling to 24 cores when nothing else is running on this computer. However, it seems like when even one single-threaded job that isn't mine is running, my 24-thread jobs (which I set for high nice values) only manage to get ~1800% CPU (using Linux notation). Meanwhile, about 500% of the CPU cycles (again, using Linux notation) are idle. Can anyone explain this behavior and what I can do about it to get all of the 23 cores that aren't being used by someone else? Notes: In case it's relevant, I have observed this on slightly different kernel versions, though I can't remember which off the top of my head. The CPU architecture is x64. Is it at all possible that the fact that my 24-core jobs are 32-bit and the other jobs I'm competing w/ are 64-bit is relevant? Edit: One thing I just noticed is that going up to 30 threads seems to alleviate the problem to some degree. It gets me up to ~2100% CPU.

    Read the article

  • Receiving DB update events in .NET from SQLite

    - by Dan Tao
    I've recently discovered the awesomeness of SQLite, specifically the .NET wrapper for SQLite at http://sqlite.phxsoftware.com/. Now, suppose I'm developing software that will be running on multiple machines on the same network. Nothing crazy, probably only 5 or 6 machines. And each of these instances of the software will be accessing an SQLite database stored in a file in a shared directory (is this a bad idea? If so, tell me!). Is there a way for each instance of the app to be notified if one instance updates the database file? One obvious way would be to use the FileSystemWatcher class, read the entire database into a DataSet, and then ... you know ... enumerate through the entire thing to see what's new ... but yeah, that seems pretty idiotic, actually. Is there such a thing as a provider of SQLite updates? Does this even make sense as a question? I'm also pretty much a newbie when it comes to ADO.NET, so I might be approaching the problem from the entirely wrong angle.
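
    A minimal sketch of the FileSystemWatcher idea mentioned above, assuming the directory, file name, and callback below are placeholders. Rather than re-reading the whole database, the change callback could re-run only the query behind whatever is currently displayed (for example, rows with an ID greater than the last one seen).

        // C# sketch: watch the shared SQLite file and react when another
        // machine writes to it. Tune NotifyFilter and add debouncing as needed.
        using System;
        using System.IO;

        class DbChangeWatcher
        {
            public static FileSystemWatcher Watch(string directory, string dbFileName, Action onChanged)
            {
                var watcher = new FileSystemWatcher(directory, dbFileName)
                {
                    NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.Size
                };
                watcher.Changed += (sender, e) => onChanged();
                watcher.EnableRaisingEvents = true;
                return watcher;
            }
        }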

    Read the article

  • Safe to update separate regions of a BufferedImage in separate threads?

    - by finnw
    I have a collection of BufferedImage instances, one main image and some subimages created by calling getSubImage on the main image. The subimages do not overlap. I am also making modifications to the subimage and I want to split this into multiple threads, one per subimage. From my understanding of how BufferedImage, Raster and DataBuffer work, this should be safe because: Each instance of BufferedImage (and its respective WritableRaster) is accessed from only one thread. The shared ColorModel is immutable The DataBuffer has no fields that can be modified (the only thing that can change is elements of the backing array.) Modifying disjoint segments of an array in separate threads is safe. However I cannot find anything in the documentation that says that it is definitely safe to do this. Can I assume it is safe? I know that it is possible to work on copies of the child Rasters but I would prefer to avoid this because of memory constraints. Otherwise, is it possible to make the operation thread-safe without copying regions of the parent image?

    Read the article

  • Can't write to physical drive in Windows 7?

    - by matt
    I wrote a disk utility that allowed you to erase whole physical drives. It uses the Windows file API, calling: destFile = CreateFile("\\.\PhysicalDrive1", GENERIC_WRITE, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, createflags, NULL); and then just calling WriteFile, making sure you write in multiples of sectors, i.e. 512 bytes. This worked fine in the past, on XP, and even on the Win7 RC; all you have to do is make sure you are running it as an administrator. But now that I have retail Win7 Professional, it doesn't work anymore! The drives still open fine for writing, but calling WriteFile on the successfully opened drive now fails! Does anyone know why this might be? Could it have something to do with opening it with shared flags? This is what I have always done before, and it's worked. Could it be that something is now sharing the drive and blocking the writes? Is there some way to properly "unmount" a drive, or at least the partitions on it, so that I would have exclusive access to it? Some other tools that used to work don't anymore either, but some do, like the WD Diagnostics erase functionality. And after it has erased the drive, my tool then works on it too! That leads me to believe there is some "unmount" process I need to be doing to the drive first, to free up permission to write to it. Any ideas?
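
    For what it's worth, a commonly reported cause on Vista/Win7 is that raw writes to sectors backing a mounted volume are rejected unless the volume is first locked and dismounted. The sketch below is an assumption-laden C# illustration of that idea, not the original utility's code; the FSCTL values are the ones from winioctl.h and the volume path is a placeholder.

        using System;
        using System.ComponentModel;
        using System.Runtime.InteropServices;
        using Microsoft.Win32.SafeHandles;

        static class VolumeLock
        {
            const uint GENERIC_READ = 0x80000000, GENERIC_WRITE = 0x40000000;
            const uint FILE_SHARE_READ = 0x1, FILE_SHARE_WRITE = 0x2, OPEN_EXISTING = 3;
            const uint FSCTL_LOCK_VOLUME = 0x00090018, FSCTL_DISMOUNT_VOLUME = 0x00090020;

            [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
            static extern SafeFileHandle CreateFile(string name, uint access, uint share,
                IntPtr security, uint disposition, uint flags, IntPtr template);

            [DllImport("kernel32.dll", SetLastError = true)]
            static extern bool DeviceIoControl(SafeFileHandle device, uint code,
                IntPtr inBuf, uint inSize, IntPtr outBuf, uint outSize,
                out uint returned, IntPtr overlapped);

            // Lock and dismount a volume on the target disk (e.g. @"\\.\E:")
            // and keep the returned handle open while writing to \\.\PhysicalDrive1.
            public static SafeFileHandle LockAndDismount(string volumePath)
            {
                var handle = CreateFile(volumePath, GENERIC_READ | GENERIC_WRITE,
                    FILE_SHARE_READ | FILE_SHARE_WRITE, IntPtr.Zero, OPEN_EXISTING, 0, IntPtr.Zero);
                if (handle.IsInvalid)
                    throw new Win32Exception(Marshal.GetLastWin32Error());
                uint returned;
                if (!DeviceIoControl(handle, FSCTL_LOCK_VOLUME, IntPtr.Zero, 0, IntPtr.Zero, 0, out returned, IntPtr.Zero) ||
                    !DeviceIoControl(handle, FSCTL_DISMOUNT_VOLUME, IntPtr.Zero, 0, IntPtr.Zero, 0, out returned, IntPtr.Zero))
                    throw new Win32Exception(Marshal.GetLastWin32Error());
                return handle;
            }
        }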

    Read the article

  • Service php-fpm does not support chkconfig

    - by ychian
    Everything is working fine, except that when I run chkconfig --add php-fpm it throws me an error: Service php-fpm does not support chkconfig php-5.2.13 php-5.2.13-fpm-0.5.13.diff.gz Below is the configuration I use: ./configure --enable-fastcgi --enable-fpm --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu --target=x86_64-redhat-linux-gnu --program-prefix= --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/usr/com --mandir=/usr/share/man --infodir=/usr/share/info --cache-file=../config.cache --with-libdir=lib64 --with-config-file-path=/etc --with-config-file-scan-dir=/etc/php.d --disable-debug --with-pic --disable-rpath --with-pear --with-bz2 --with-curl --with-exec-dir=/usr/bin --with-freetype-dir=/usr --with-png-dir=/usr --enable-gd-native-ttf --without-gdbm --with-gettext --with-gmp --with-iconv --with-jpeg-dir=/usr --with-openssl --with-png --with-expat-dir=/usr --with-pcre-regex=/usr --with-zlib --with-layout=GNU --enable-exif --enable-ftp --enable-magic-quotes --enable-sockets --enable-sysvsem --enable-sysvshm --enable-sysvmsg --enable-track-vars --enable-trans-sid --enable-yp --enable-wddx --with-kerberos --enable-ucd-snmp-hack --with-unixODBC=shared,/usr --enable-memory-limit --enable-shmop --enable-calendar --enable-dbx --enable-dio --with-mime-magic=/usr/share/file/magic.mime --without-sqlite --with-libxml-dir=/usr --with-xml --with-system-tzdata --without-mysql --without-gd --without-odbc --disable-dom --disable-dba --without-unixODBC --disable-pdo --disable-xmlreader --disable-xmlwriter

    Read the article

  • How to implement Master-Detail with Multi-Selection in WPF?

    - by gehho
    Hi, I plan to create a typical Master-Detail scenario, i.e. a collection of items displayed in a ListView via DataBinding to an ICollectionView, and details about the selected item in a separate group of controls (TextBoxes, NumUpDowns...). No problem so far, actually I have already implemented a pretty similar scenario in an older project. However, it should be possible to select multiple items in the ListView and get the appropriate shared values displayed in the detail view. This means, if all selected items have the same value for a property, this value should be displayed in the detail view. If they do not share the same value, the corresponding control should provide some visual clue for the user indicating this, and no value should be displayed (or an "undefined" state in a CheckBox for example). Now, if the user edits the value, this change should be applied to all selected items. Further requirements are: MVVM compatibility (i.e. not too much code-behind) Extendability (new properties/types can be added later on) Does anyone have experience with such a scenario? Actually, I think this should be a very common scenario. However, I could not find any details on that topic anywhere. Thanks! gehho. PS: In the older project mentioned above, I had a solution using a subclass of the ViewModel which handles the special case of multi-selection. It checked all selected items for equality and returned the appropriate values. However, this approach had some drawbacks and somehow seemed like a hack because (besides other smelly things) it was necessary to break the synchronization between the ListView and the detail view and handle it manually.
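
    One way to approach this without much code-behind is a detail view model that wraps the current selection and exposes each editable property as "the common value, or null when the selected items disagree"; the detail controls can then show an indeterminate/undefined state for null and push edits back to every selected item. A minimal sketch, with hypothetical type and property names:

        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Linq;

        // Hypothetical per-item view model with one editable property.
        public class ItemViewModel : INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged;
            private string _name;
            public string Name
            {
                get { return _name; }
                set { _name = value; Raise("Name"); }
            }
            private void Raise(string p)
            {
                var h = PropertyChanged;
                if (h != null) h(this, new PropertyChangedEventArgs(p));
            }
        }

        // Bound to the detail pane; wraps whatever is currently selected.
        public class SelectionDetailViewModel
        {
            private readonly IList<ItemViewModel> _selection;
            public SelectionDetailViewModel(IList<ItemViewModel> selection) { _selection = selection; }

            // Shared value across the selection, or null when the items differ;
            // setting the property applies the edit to every selected item.
            // (A real version would also raise PropertyChanged when the selection changes.)
            public string Name
            {
                get
                {
                    var distinct = _selection.Select(i => i.Name).Distinct().ToList();
                    return distinct.Count == 1 ? distinct[0] : null;
                }
                set
                {
                    foreach (var item in _selection) item.Name = value;
                }
            }
        }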

    Read the article

  • SQLite assembly not copied to output folder for unit testing

    - by Groo
    Problem: SQLite assembly referenced in my DAL assembly does not get copied to the output folder when doing unit tests (Copy local is set to true). I am working on a .Net 3.5 app in VS2008, with NHibernate & SQLite in my DAL. Data access is exposed through the IRepository interface (repository factory) to other layers, so there is no need to reference NHibernate or the System.Data.SQLite assemblies in other layers. For unit testing, there is a public factory method (also in my DAL) which creates an in-memory SQLite session and creates a new IRepository implementation. This is also done to avoid have a shared SQLite in-memory config for all assemblies which need it, and to avoid referencing those DAL internal assemblies. The problem is when I run unit tests which reside a separate project - if I don't add System.Data.SQLite as a reference to the unit test project, it doesn't get copied to the TestResults...\Out folder (although this project references my DAL project, which references System.Data.SQLite, which has its Copy local property set to true), so the tests fail while NHibernate is being configured. If I add the reference to my testing project, then it does get copied and unit tests work. What am I doing wrong?
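
    If the goal is specifically to get the assembly into the TestResults\...\Out folder without referencing it from the test project, MSTest's DeploymentItem attribute is one option to consider; the path below is an assumption and should point at wherever the build actually drops System.Data.SQLite.dll.

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        // Copies the SQLite assembly next to the tests at deployment time, so
        // NHibernate can load it even though the test project never references
        // System.Data.SQLite directly. The relative path is a guess.
        [DeploymentItem(@"..\..\..\MyDal\bin\Debug\System.Data.SQLite.dll")]
        public class RepositoryTests
        {
            [TestMethod]
            public void CanCreateInMemoryRepository()
            {
                // ... call the DAL factory method and assert the IRepository works ...
            }
        }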

    Read the article

  • Visual Studio 2005 to VS 2008

    - by Adi
    Hi all, I am new to working in the VS IDE and don't have much experience with how the different libraries and files are linked in it. I have to build an OpenCV project, made in VS2005 by one of my colleagues, in VS2008. The project is for blob detection. The following is what he has to say in the readme: Steps to use the library (using MSVC++ SP5): 1 - Open the project of the library and build it. 2 - In the project where the library should be used, add: 2.1 - In "Project/Settings/C++/Preprocessor/Additional Include directories" add the directory where the blob library is stored. 2.2 - In "Project/Settings/Link/Input/Additional library path" add the directory where the blob library is stored, and in "Object/Library modules" add the cvblobslib.lib file. 3 - Include the file "BlobResult.h" where you want to use blob variables. 4 - To see an example of using the blob library, see the file example.txt inside the zip file. NOTE: Verify that in the project where cvblobslib.lib is used, the MFC runtime libraries are not mixed: 1. Check "Project-Settings-C/C++-Code Generation-Use run-time library" for your project and set it to Debug Multithreaded DLL (debug version) or Multithreaded DLL (release version). 2. Check in "Project-Settings-General" how it uses MFC. It should be "Use MFC in a shared DLL". NOTE: The library can be compiled and used in .NET using these steps, but the menu options may differ a little. NOTE2: In the .NET version, the character sets must be equal in the .lib and in the project. [OpenCV yahoo group: Msg 35500] Can anyone explain how to go about doing this in VS2008? I would also appreciate it if someone could explain how the different libraries are linked, and what Debug and Release (and everything else in a Visual Studio project folder) are. Thanks in advance, Aditya

    Read the article

  • Determining failing sectors on portable flash memory

    - by Faxwell Mingleton
    I'm trying to write a program that will detect signs of failure for portable flash memory devices (thumb drives, etc). I have seen tools in the past that are able to detect failing sectors and other kinds of trouble on conventional mechanical hard drives, but I fear that flash memory does not have the same kind of predictable low-level access to the hardware due to the internal workings of the storage. Things like wear-leveling and other block-remapping techniques (to skip over 'dead' sectors?) lead me to believe that determining if a flash drive is failing will be difficult at best, if not impossible (short of having constant read failures and device unmounts). Flash drives at their end-of-life should be easy to detect (constant CRC discrepancies during reads and all-out failure). But what about drives that might be failing early? Are there any tell-tale signs like slower throughput speeds that might indicate a flash drive is going to fail much sooner than normal? Along the lines of detecting potentially bad blocks, I had considered attempting random reads/writes to a file close to or exactly the size of the entire volume, but even then is it possible that the drive might report sizes under its maximum capacity to account for 'dead' blocks? In short, is there any way to circumvent or at least detect (algorithmically or otherwise) the use of block-remapping or other life extension techniques for flash memory? Let me end this question by expressing my uncertainty as to whether or not this belongs on serverfault.com . This is definitely a hardware-related question, but I also desire a software solution - preferably one that I can program myself. If this question is misplaced, I will be happy to migrate it to serverfault - but I do need a programming solution. Please let me know if you need clarification :) Thanks!
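
    As a rough illustration of the "write a file close to the size of the volume and read it back" idea, a sketch like the one below records throughput and byte mismatches. It cannot see past wear-levelling or the controller's remapping; it only measures what the drive chooses to expose. Path, size, and seed are placeholders.

        using System;
        using System.Diagnostics;
        using System.IO;

        static class FlashSurfaceTest
        {
            // Writes a reproducible pseudo-random pattern, reads it back, and
            // reports throughput plus any mismatched bytes.
            public static void Run(string path, long totalBytes, int seed)
            {
                const int block = 1 << 20; // 1 MiB blocks
                var buffer = new byte[block];
                var rng = new Random(seed);
                var sw = Stopwatch.StartNew();

                using (var fs = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.None, block))
                    for (long done = 0; done < totalBytes; done += block)
                    {
                        rng.NextBytes(buffer);
                        fs.Write(buffer, 0, (int)Math.Min(block, totalBytes - done));
                    }
                Console.WriteLine("Write: {0:F1} MB/s", totalBytes / 1048576.0 / sw.Elapsed.TotalSeconds);

                rng = new Random(seed); // same sequence again for verification
                var expected = new byte[block];
                var actual = new byte[block];
                long mismatches = 0;
                sw.Restart();
                using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.None, block))
                    for (long done = 0; done < totalBytes; done += block)
                    {
                        rng.NextBytes(expected);
                        int got = fs.Read(actual, 0, (int)Math.Min(block, totalBytes - done));
                        for (int i = 0; i < got; i++)
                            if (actual[i] != expected[i]) mismatches++;
                    }
                Console.WriteLine("Read: {0:F1} MB/s, mismatched bytes: {1}",
                    totalBytes / 1048576.0 / sw.Elapsed.TotalSeconds, mismatches);
            }
        }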

    Read the article

  • RedirectToAction and validate MVC 2

    - by Dan
    Hi, my problem is getting validation back to the view where the user typed. I have to use RedirectToAction on this page because the page uploads a file. Here is my code. My model class: public class Person { [Required(ErrorMessage= "Please enter name")] public string name { get; set; } } My view: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<MvcWebRole1.Models.Person>" %> Name <h2>Information Data</h2> <%= Html.ValidationSummary() %> <%using (Html.BeginForm ("upload","Home", FormMethod.Post, new{ enctype ="multipart/form-data" })) {%> <fieldset> <legend>Fields</legend> <p> <label for="name">name:</label> <%= Html.TextBox("name") %> <%= Html.ValidationMessage("name", "*") %> </p> </fieldset> <% } %> And the controller: [AcceptVerbs(HttpVerbs.Post)] public ActionResult upload(FormCollection form) { Person lastname = new Person(); lastname.name = form["name"]; return RedirectToAction("Index"); } Thanks in advance for answering my question.
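
    For reference, a hedged sketch of how the controller could bind directly to Person, so the [Required] attribute populates ModelState, returning the form's view on failure and redirecting only on success. The HttpPostedFileBase parameter name and the save path are assumptions, since the file input isn't shown in the post.

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult upload(Person person, HttpPostedFileBase file)
        {
            // Model binding runs the DataAnnotations validation for us.
            if (!ModelState.IsValid)
                return View(person); // redisplay the form (pass the view name if it differs from the action name)

            if (file != null && file.ContentLength > 0)
                file.SaveAs(Server.MapPath("~/App_Data/" + System.IO.Path.GetFileName(file.FileName)));

            return RedirectToAction("Index"); // redirect only after a successful save
        }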

    Read the article

  • AutoCompleteExtender not suggesting search terms

    - by Phil
    My codebehind method: <System.Web.Services.WebMethodAttribute(), System.Web.Script.Services.ScriptMethodAttribute()> _ Public Shared Function GetCompletionList(ByVal prefixText As String, ByVal count As Integer, ByVal contextKey As String) As String() ' Create array of movies Dim movies() As String = {"Star Wars", "Star Trek", "Superman", "Memento", "Shrek", "Shrek II"} ' Return matching movies Return From m In movies Where (m.StartsWith(prefixText, StringComparison.CurrentCultureIgnoreCase)) Select m _ .Take(count).ToArray() End Function End Class Then in my aspx page I have: <form id="form1" runat="server"> <div> <asp:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server"> </asp:ToolkitScriptManager> <br /> <asp:TextBox ID="TextBox1" runat="server"></asp:TextBox> <asp:AutoCompleteExtender ID="TextBox1_AutoCompleteExtender" runat="server" DelimiterCharacters="" Enabled="True" ServiceMethod="GetCompletionList" ServicePath="" TargetControlID="TextBox1" UseContextKey="True" MinimumPrefixLength="2"> </asp:AutoCompleteExtender> </div> </form> When I run the page there are no errors, but there also are no auto complete suggestions. Please help!

    Read the article

  • Synchronizing ASP.NET MVC action methods with ReaderWriterLockSlim

    - by James D
    Any obvious issues/problems/gotchas with synchronizing access (in an ASP.NET MVC blogging engine) to a shared object model (NHibernate, but it could be anything) at the Controller/Action level via ReaderWriterLockSlim? (Assume the object model is very large and expensive to build per-request, so we need to share it among requests.) Here's how a typical "Read Post" action would look. Enter the read lock, do some work, exit the read lock. public ActionResult ReadPost(int id) { // ReaderWriterLockSlim allows multiple concurrent writes; this method // only blocks in the unlikely event that some other client is currently // writing to the model, which would only happen if a comment were being // submitted or a new post were being saved. _lock.EnterReadLock(); try { // Access the model, fetch the post with specificied id // Pseudocode, etc. Post p = TheObjectModel.GetPostByID(id); ActionResult ar = View(p); return ar; } finally { // Under all code paths, we must release the read lock _lock.ExitReadLock(); } } Meanwhile, if a user submits a comment or an author authors a new post, they're going to need write access to the model, which is done roughly like so: [AcceptVerbs(HttpVerbs.Post)] public ActionResult SaveComment(/* some posted data */) { // try/finally omitted for brevity _lock.EnterWriteLock(); // Save the comment to the DB, update the model to include the comment, etc. _lock.ExitWriteLock(); } Of course, this could also be done by tagging those action methods with some sort of "synchronized" attribute... but however you do it, my question is is this a bad idea? ps. ReaderWriterLockSlim is optimized for multiple concurrent reads, and only blocks if the write lock is held. Since writes are so infrequent (1000s or 10,000s or 100,000s of reads for every 1 write), and since they're of such a short duration, the effect is that the model is synchronized , and almost nobody ever locks, and if they do, it's not for very long.
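
    On the "synchronized attribute" idea mentioned above, a minimal sketch of an MVC action filter that takes the lock before the action and releases it afterwards. It assumes synchronous controllers (enter and exit must happen on the same thread for ReaderWriterLockSlim) and a single process-wide lock; the attribute and enum names are made up.

        using System.Threading;
        using System.Web.Mvc;

        public enum ModelAccess { Read, Write }

        public class ModelLockAttribute : ActionFilterAttribute
        {
            // One process-wide lock shared by all tagged actions.
            private static readonly ReaderWriterLockSlim Lock = new ReaderWriterLockSlim();
            private readonly ModelAccess _access;

            public ModelLockAttribute(ModelAccess access) { _access = access; }

            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                if (_access == ModelAccess.Write) Lock.EnterWriteLock();
                else Lock.EnterReadLock();
            }

            public override void OnActionExecuted(ActionExecutedContext filterContext)
            {
                // Runs even when the action throws, so the lock is not leaked.
                if (_access == ModelAccess.Write) Lock.ExitWriteLock();
                else Lock.ExitReadLock();
            }
        }

        // Usage: [ModelLock(ModelAccess.Read)]  public ActionResult ReadPost(int id) { ... }
        //        [ModelLock(ModelAccess.Write)] public ActionResult SaveComment(...)  { ... }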

    Read the article

  • How to use references, avoid header bloat, and delay initialization?

    - by Kyle
    I was browsing for an alternative to using so many shared_ptrs, and found an excellent reply in a comment section: Do you really need shared ownership? If you stop and think for a few minutes, I'm sure you can pinpoint one owner of the object, and a number of users of it, that will only ever use it during the owner's lifetime. So simply make it a local/member object of the owners, and pass references to those who need to use it. I would love to do this, but the problem becomes that the definition of the owning object now needs the owned object to be fully defined first. For example, say I have the following in FooManager.h: class Foo; class FooManager { shared_ptr<Foo> foo; shared_ptr<Foo> getFoo() { return foo; } }; Now, taking the advice above, FooManager.h becomes: #include "Foo.h" class FooManager { Foo foo; Foo& getFoo() { return foo; } }; I have two issues with this. First, FooManager.h is no longer lightweight. Every cpp file that includes it now needs to compile Foo.h as well. Second, I no longer get to choose when foo is initialized. It must be initialized simultaneously with FooManager. How do I get around these issues?

    Read the article

  • Connecting to a network drive programmatically and caching credentials

    - by Chris Doggett
    I'm finally set up to be able to work from home via VPN (using Shrew as a client), and I only have one annoyance. We use some batch files to upload config files to a network drive. Works fine from work, and from my team lead's laptop, but both of those machines are on the domain. My home system is not, and won't be, so when I run the batch file, I get a ton of "invalid drive" errors because I'm not a domain user. The solution I've found so far is to make a batch file with the following: explorer \\MACHINE1 explorer \\MACHINE2 explorer \\MACHINE3 Then manually login to each machine using my domain credentials as they pop up. Unfortunately, there are around 10 machines I may need to use, and it's a pain to keep entering the password if I missed one that a batch file requires. I'm looking into using the answer to this question to make a little C# app that'll take the login info once and login programmatically. Will the authentication be shared automatically with Explorer, or is there anything special I need to do? If it does work, how long are the credentials cached? Is there an app that does something like this automatically? Unfortunately, domain authentication via the VPN isn't an option, according to our admin. EDIT: If there's a way to pass login info to Explorer via the command line, that would be even easier using Ruby and highline.
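
    One low-tech option, sketched below under the assumption that shelling out is acceptable: have a small C# helper run "net use" once per machine, which makes Windows hold the connection (and its credentials) for the current logon session, so Explorer and the batch files can then reach the shares. WNetAddConnection2 via P/Invoke is the API equivalent if spawning a process is undesirable. Machine names and the IPC$ share are placeholders.

        using System.Diagnostics;

        static class NetworkShareLogin
        {
            // Authenticate to \\machine with explicit domain credentials.
            // Returns true when "net use" reports success.
            public static bool Connect(string machine, string user, string password)
            {
                var psi = new ProcessStartInfo("net",
                    string.Format(@"use \\{0}\IPC$ ""{1}"" /user:{2}", machine, password, user))
                {
                    CreateNoWindow = true,
                    UseShellExecute = false
                };
                using (var p = Process.Start(psi))
                {
                    p.WaitForExit();
                    return p.ExitCode == 0;
                }
            }
        }

        // Example:
        //   foreach (var m in new[] { "MACHINE1", "MACHINE2", "MACHINE3" })
        //       NetworkShareLogin.Connect(m, @"DOMAIN\username", password);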

    Read the article

  • Send post request from client to node.js

    - by Husar
    In order to learn node.js I have built a very simple guestbook app. There is basically just a comment form and a list of previous comments. Currently the app is client side only and the items are stored within local storage. What I want to do is send the items to node, where I will save them using MongoDB. The problem is I have not yet found a way to establish a connection to send data back and forth between the client and node.js using POST requests. What I do now is add listeners to the request and wait for data I send: request.addListener('data', function(chunk) { console.log("Received POST data chunk '"+ chunk + "'."); }); On the client side I send the data using a simple AJAX request: $.ajax({ url: '/', type: 'post', dataType: 'json', data: 'test' }) This does not work at all at the moment. It could be that I don't know what URL to place in the AJAX request 'url' parameter. Or the whole thing might just be built using the wrong approach. I have also tried implementing the method described here, but also with no success. It would really help if anyone could share some tips on how to make this work (sending a POST request from the client side to node and back) or share any good tutorials. Thanks.

    Read the article

  • Rails: Create method available in all views and all models

    - by smotchkkiss
    I'd like to define a method that is available in both my views and my models Say I have a view helper: def foo(s) "hello #{s}" end A view might use the helper like this: <div class="data"><%= foo(@user.name) %></div> However, this <div> will be updated with a repeating ajax call. I'm using a to_json call in a controller returns data like so: render :text => @item.to_json(:only => [...], :methods => [:foo]) This means, that I have to have foo defined in my Item model as well: class Item def foo "hello #{name}" end end It'd be nice if I could have a DRY method that could be shared in both my views and my models. Usage might look like this: Helper def say_hello(s) "hello #{s}" end User.rb model def foo say_hello(name) end Item.rb model def foo say_hello(label) end View <div class="data"><%= item.foo %></div> Controller def observe @items = item.find(...) render :text => @items.to_json(:only=>[...], :methods=>[:foo]) end IF I'M DUMB, please let me know. I don't know the best way to handle this, but I don't want to completely go against best-practices here. If you can think of a better way, I'm eager to learn!

    Read the article

  • Flex-built SWF's no longer work, error 2048, 2046, 2032

    - by Kevin
    I'm really confused about this problem, and I'm pretty new to Flex. Basically, anything I try to build with mxmlc fails to run now, giving me the above three errors depending on what I do. It was working 30 minutes ago, I've been spending that time trying to figure out what has changed. I redownloaded the Flex SDK, cleared my assetcache, have cleared Firefox's cache. (I'm using Linux.) Even if I compile with -static-link-runtime-shared-libraries=false, since it seems like #2048 is a RSL problem, it still refuses to run. Another strange thing, if I keep <policy-file-url>http://fpdownload.adobe.com/pub/swz/crossdomain.xml</policy-file-url> <rsl-url>textLayout_1.0.0.595.swz</rsl-url> in my flex-config file, then firebug tells me that my swf file is trying to access a copy of that in the app's folder, giving error 2032. And if I stick the one I have in frameworks/rsls/ then it gives me error 2046. I don't know how it could not be properly signed, unless Adobe magically changed a signature and didn't update their flex SDK. Any help will be appreciated.

    Read the article

  • iPhone programming - problem with CoreFoundation forking

    - by Tom
    Hello all, I've been working on an iPhone game for several months. It's a 2D shooting game akin to the old Smash TV type games. I'm doing everything alone and it has come out well so far, but now I am getting unpredictable crashes which seem to be related to CoreFoundation forking and not exec()ing, as the message __THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNCTIONALITY___YOU_MUST_EXEC__ always shows up somewhere in the debugger. Usually it shows up around a CFRunLoopRunSpecific and is related to either a timer firing or _InitializeTouchTapCount. I cannot figure out exactly what is causing the fork to occur. My main game loop is running on a timer, first updating all the logic and then drawing everything with OpenGL. There is nothing highly complex or unusual. I understand you cannot make CF calls on the child side of a fork, or access shared memory and things like that. I am not explicitly trying to fork anything. My question is: can anyone tell me what type of activity might cause CoreFoundation to randomly fork like this? I'd really like to finish this game and I don't know how to solve this problem. Thanks for any help.

    Read the article

  • Is there a standard mapping between JSON and Protocol Buffers?

    - by Daniel Earwicker
    From a comment on the announcement blog post: Regarding JSON: JSON is structured similarly to Protocol Buffers, but protocol buffer binary format is still smaller and faster to encode. JSON makes a great text encoding for protocol buffers, though -- it's trivial to write an encoder/decoder that converts arbitrary protocol messages to and from JSON, using protobuf reflection. This is a good way to communicate with AJAX apps, since making the user download a full protobuf decoder when they visit your page might be too much. It may be trivial to cook up a mapping, but is there a single "obvious" mapping between the two that any two separate dev teams would naturally settle on? If two products supported PB data and could interoperate because they shared the same .proto spec, I wonder if they would still be able to interoperate if they independently introduced a JSON reflection of the same spec. There might be some arbitrary decisions to be made, e.g. should enum values be represented by a string (to be human-readable a la typical JSON) or by their integer value? So is there an established mapping, and any open source implementations for generating JSON encoder/decoders from .proto specs?

    Read the article

  • DropDownListFor and relating my lambda to my ViewModel

    - by Daniel Harvey
    After googling for a while I'm still drawing a blank here. I'm trying to use a ViewModel to pull and provide a dictionary to a drop down list inside a strongly typed View: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="Notebook.ViewModels.CorporationJoinViewModel" %> ... <%: Html.DropDownListFor(c => c.CorpDictionary.Keys, new SelectList(Model.CorpDictionary, "Value", "Key"))%> I'm getting the error CS1061: 'object' does not contain a definition for 'CorpDictionary' and no extension method 'CorpDictionary' accepting a first argument of type 'object' could be found and the relevant bit of my ViewModel public class CorporationJoinViewModel { DB _db = new DB(); // data context public Dictionary<int, string> CorpDictionary { get { Dictionary<int, string> corporations = new Dictionary<int, string>(); int x = 0; foreach (Corporation corp in _db.Corporations) { corporations.Add(x, corp.name); } return corporations; } } I'll admit i have a pretty magical understanding of how linq is finding my ViewModel object from that lambda, and the error message is making me think it's not. Is my problem the method I'm using to pass the data? What am I missing here?
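
    A couple of things stand out, offered as guesses rather than a definitive fix: the Inherits attribute names the view model type directly instead of ViewPage<CorporationJoinViewModel>, which would leave Model typed as object; the loop never increments x, so the dictionary would throw on the second Add; and the lambda should select a scalar property to hold the chosen key. A sketch of the view model under those assumptions (DB and Corporation are the existing types from the post):

        using System.Collections.Generic;

        public class CorporationJoinViewModel
        {
            private readonly DB _db = new DB(); // existing data context (assumed)

            // Scalar property for the selected key, so the helper has
            // something to bind the chosen value to.
            public int SelectedCorporationId { get; set; }

            public Dictionary<int, string> CorpDictionary
            {
                get
                {
                    var corporations = new Dictionary<int, string>();
                    int x = 0;
                    foreach (Corporation corp in _db.Corporations)
                        corporations.Add(x++, corp.name); // x++ avoids duplicate-key exceptions
                    return corporations;
                }
            }
        }

        // In the view (roughly):
        //   Inherits="System.Web.Mvc.ViewPage<Notebook.ViewModels.CorporationJoinViewModel>"
        //   <%: Html.DropDownListFor(m => m.SelectedCorporationId,
        //           new SelectList(Model.CorpDictionary, "Key", "Value")) %>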

    Read the article

  • Using git pull to track a remote branch without merging

    - by J Barlow
    I am using git to track content which is changed by some people and shared "read-only" with others. The "readers" may from time to time need to make a change, but mostly they will not. I want to allow for the git "writers" to rebase pushed branches** if need be, and ensure that the "readers" never accidentally get a merge. That's normally easy enough. git pull origin +master There's one case that seems to cause problems. If a reader makes a local change, the command above will merge. I want pull to be fully automatic if the reader has not made local changes, while if they have made local changes, it should stop and ask for input. I want to track any upstream changes while being careful about merging downstream changes. In a way, I don't really want to pull. I want to track the master branch exactly. ** (I know this is not a best practice, but it seems necessary in our case: we have one main branch that contains most of the work and some topic branches for specific customers with minor changes that need to be isolated. It seems easiest to frequently rebase to keep the topics up to date.)

    Read the article

  • Class; Struct; Enum confusion, what is better?

    - by Angel Brighteyes
    I have 46 rows of information, 2 columns each row ("Code Number", "Description"). These codes are returned to the client dependent upon the success or failure of their initial submission request. I do not want to use a database file (csv, sqlite, etc) for the storage/access. The closest type that I can think of for how I want these codes to be shown to the client is the exception class. Correct me if I'm wrong, but from what I can tell enums do not allow strings, though this sort of structure seemed the better option initially based on how it works (e.g. 100 = "missing name in request"). Thinking about it, creating a class might be the best modus operandi. However I would appreciate more experienced advice or direction and input from those who might have been in a similar situation. Currently this is what I have: class ReturnCode { private int _code; private string _message; public ReturnCode(int code) { Code = code; } public int Code { get { return _code; } set { _code = value; _message = RetrieveMessage(value); } } public string Message { get { return _message; } } private string RetrieveMessage(int value) { string message; switch (value) { case 100: message = "Request completed successfuly"; break; case 201: message = "Missing name in request."; break; default: message = "Unexpected failure, please email for support"; break; } return message; } }
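
    If the class route is taken, one way to keep the 46 rows in a single table-like place without a switch or a database file is a static read-only dictionary; unknown codes fall through to the generic support message. A sketch (only the two messages from the post are real, the rest would be filled in):

        using System.Collections.Generic;

        public class ReturnCode
        {
            // The 46 code/description pairs live in one static table.
            private static readonly Dictionary<int, string> Messages = new Dictionary<int, string>
            {
                { 100, "Request completed successfully" },
                { 201, "Missing name in request." },
                // ... remaining rows ...
            };

            public int Code { get; private set; }
            public string Message { get; private set; }

            public ReturnCode(int code)
            {
                Code = code;
                string message;
                Message = Messages.TryGetValue(code, out message)
                    ? message
                    : "Unexpected failure, please email for support";
            }
        }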

    Read the article

  • What am I missing in IIS7?

    - by faded19
    Hello all, OK, here is my dilemma. I have been developing on a shared host at discountasp.net (IIS 6) for some time now. All was going well; however, now that the app is complete we are moving it to its own dedicated server, which runs Server 2008 and IIS 7. I am currently using ASP.NET forms authentication (which again seems to work just fine on IIS 6). The issue seems to occur after I click login: it pops the "Signing In" box... an error then arises in the JavaScript of Membership.js, "Object Does not Support Membership.js". I verified that the code was making it to: membership.BeginLogin(uid, pwd, rememberme); and was in fact passing the correct variables. Another odd thing I noticed when setting the forms permissions is that when I went to select Users or Roles within the IIS 7 management console it would take forever, and then time out with the following error: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections (provider - named pipes provider: error 40 - could not open a connection to SQL Server.) I am rather weak on the hardware/configuration side of the house, so I am not really sure what the issue is; it's almost as if IIS7 cannot see the DB. They both reside on the same server, however. If anyone could help point me in the right direction as to how to resolve this I would be eternally grateful! Thanks in advance, Bryan

    Read the article

  • ASP.MVC 2 RTM + ModelState Error at Id property

    - by Zote
    I have these classes: public class GroupMetadata { [HiddenInput(DisplayValue = false)] public int Id { get; set; } [Required] public string Name { get; set; } } [MetadataType(typeof(GrupoMetadata))] public partial class Group { public virtual int Id { get; set; } public virtual string Name { get; set; } } And this action: [HttpPost] public ActionResult Edit(Group group) { if (ModelState.IsValid) { // Logic to save return RedirectToAction("Index"); } return View(group); } That's my view: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<Group>" %> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <% using (Html.BeginForm()) {%> <fieldset> <%= Html.EditorForModel() %> <p> <input type="submit" value="Save" /> </p> </fieldset> <% } %> <div> <%=Html.ActionLink("Back", "Index") %> </div> </asp:Content> But ModelState is always invalid! As I can see, for MVC validation 0 is invalid, but for me it is valid. How can I fix it, since I didn't put any kind of validation on the Id property? UPDATE: I don't know how or why, but renaming Id (in my case, to PK) solves this problem. Do you know if this is an issue in my logic/configuration, or is it a bug or expected behavior? Thank you

    Read the article

  • First-chance exception at std::set destructor

    - by bartek
    Hi, I have a strange exception at my class destructor: First-chance exception reading location 0x00000 class DispLst{ // For fast instance existance test std::set< std::string > instances; [...] DispLst::~DispLst(){ this->clean(); DeleteCriticalSection( &instancesGuard ); } <---- here instances destructor raises exception Call stack: X.exe!std::_Tree,std::allocator ,std::less,std::allocator ,std::allocator,std::allocator ,0 ::begin() Line 556 + 0xc bytes C++ X.exe!std::_Tree,std::allocator ,std::less,std::allocator ,std::allocator,std::allocator ,0 ::_Tidy() Line 1421 + 0x64 bytes C++ X.exe!std::_Tree,std::allocator ,std::less,std::allocator ,std::allocator,std::allocator ,0 ::~_Tree,std::allocator ,std::less,std::allocator ,std::allocator,std::allocator ,0 () Line 541 C++ X.exe!std::set,std::allocator ,std::less,std::allocator ,std::allocator,std::allocator ::~set,std::allocator ,std::less,std::allocator ,std::allocator,std::allocator () + 0x2b bytes C++ X.exe!DispLst::~DispLst() Line 82 + 0xf bytes C++ The exact place of error in xtree: void _Tidy() { // free all storage erase(begin(), end()); <------------------- HERE this->_Alptr.destroy(&_Left(_Myhead)); this->_Alptr.destroy(&_Parent(_Myhead)); this->_Alptr.destroy(&_Right(_Myhead)); this->_Alnod.deallocate(_Myhead, 1); _Myhead = 0, _Mysize = 0; } iterator begin() { // return iterator for beginning of mutable sequence return (_TREE_ITERATOR(_Lmost())); <---------------- HERE } What is going on ? I'm using Visual Studio 2008.

    Read the article
