Search Results

Search found 269 results on 11 pages for 'alistair bell'.

Page 9/11 | < Previous Page | 5 6 7 8 9 10 11  | Next Page >

  • Extra Life 2012

    - by Chris Gardner
    Greetings, It's that time of year again. The time when I beg you for money for charity. See, unlike those bell ringers outside Wal-Mart, I don't do it when you have ten bazillion holiday obligations... Once again, I will be enduring a 24-hour marathon of gaming to raise money for Children's Hospital in Birmingham. All the money goes straight to them, and you get to tell Uncie Samual that you're good for that money. I'd REALLY like to break $1000 this year, as I have come REALLY close to doing so for the past 2 years. Don't live near me? Live closer to a children's hospital in the Children's Miracle Network? It's OK. Go find a participant who is working for your hospital and hook them up. Just let me know, and I will join in with the karmic love you will already receive. This year, the event will take place on October 20th, beginning at 8 A.M. Once again, I will try to provide some web streams, etc., if you want to point and laugh (especially if I have to resort to playing Dance Central at 4 AM to stay awake for the last part). Look at it this way: I'm going to badger you about this for the next month. You might as well donate some money so you can righteously tell me to shut the Smurf up. You can place your bid at the link below. Feel free to spread the word to anyone and everyone. I thank you. The children thank you. Several breeds of feral platypus thank you. Maybe, just maybe, doing so will help you feel the love felt by re-fried beans when lovingly hugged in a warm tortilla. Enjoy your burrito. http://www.extra-life.org/participant/cgardner

    Read the article

  • What is the best type of c# timer to use with an Unity game that uses many timers simultaneously?

    - by Kyle Seidlitz
    I am developing a stand-alone 3D game in Unity that will have anywhere from 1 to 200 timers running simultaneously. For this game, timer durations will range from 5 minutes to 4 days. There will not be any countdown displays or any UI for the timers. An object will be selected, a menu choice will then be selected, and the timer will start. Several events will occur at different intervals during the duration of the timer. The events will be confined to changing the material of the selected object and playing a 1-second sound effect like a chime or a bell. If the user wants to save or end the game before all the timers are done, the start times of the still-running timers are to be saved to an XML file so that, when the game is started again, a calculation can be done to see whether each timer has since finished, and the game can change the materials appropriately. I am still trying to figure out what type of timer to use, and I would also welcome suggestions for saving and calculating times that span several days. What class(es) of timers should I use? Are there any special issues I should look out for in terms of performance?
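
    One way to make timers survive a save/load cycle, sketched below, is to store an absolute UTC start time plus a duration and derive each timer's state on demand, rather than keeping a live countdown object per timer. This is only an illustration of the approach described in the question, not Unity-specific code; the LongTimer and TimerStore names and the TargetObjectId field are made up for the example, and it assumes the system clock can be trusted.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Xml.Serialization;

        // Hypothetical timer record: state is derived from an absolute start
        // time, so it survives save/load and works for multi-day durations.
        public class LongTimer
        {
            public string TargetObjectId;      // which object the timer belongs to
            public DateTime StartUtc;          // absolute start time (UTC)
            public double DurationSeconds;     // anywhere from 5 minutes to 4 days

            [XmlIgnore]
            public TimeSpan Elapsed
            {
                get { return DateTime.UtcNow - StartUtc; }
            }

            [XmlIgnore]
            public bool IsDone
            {
                get { return Elapsed.TotalSeconds >= DurationSeconds; }
            }
        }

        public static class TimerStore
        {
            // Persist only the data needed to reconstruct the timers later.
            public static void Save(string path, List<LongTimer> timers)
            {
                var serializer = new XmlSerializer(typeof(List<LongTimer>));
                using (var stream = File.Create(path))
                    serializer.Serialize(stream, timers);
            }

            public static List<LongTimer> Load(string path)
            {
                var serializer = new XmlSerializer(typeof(List<LongTimer>));
                using (var stream = File.OpenRead(path))
                    return (List<LongTimer>)serializer.Deserialize(stream);
            }
        }

    With this shape, a single sweep over the list once per second (or per frame) can fire the material changes and chimes, which is negligible work for 200 timers, and the same IsDone check answers the "has it finished yet?" question after the game has been closed for days.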

    Read the article

  • Generic Repositories with DI & Data Intensive Controllers

    - by James
    Usually, I consider a large number of parameters an alarm bell that there may be a design problem somewhere. I am using a generic repository for an ASP.NET application and have a Controller with a growing number of parameters.

        public class GenericRepository<T> : IRepository<T> where T : class
        {
            protected DbContext Context { get; set; }
            protected DbSet<T> DbSet { get; set; }

            public GenericRepository(DbContext context)
            {
                Context = context;
                DbSet = context.Set<T>();
            }

            // ...methods excluded to keep the question readable
        }

    I am using a DI container to pass the DbContext into the generic repository. So far, this has met my needs and there are no other concrete implementations of IRepository<T>. However, I had to create a dashboard which uses data from many entities. There was also a form containing a couple of dropdown lists. Using the generic repository, this makes the parameter requirements grow quickly. The Controller will end up being something like

        public HomeController(IRepository<EntityOne> entityOneRepository,
                              IRepository<EntityTwo> entityTwoRepository,
                              IRepository<EntityThree> entityThreeRepository,
                              IRepository<EntityFour> entityFourRepository,
                              ILogError logError,
                              ICurrentUser currentUser)
        {
        }

    It has about 6 IRepositories plus a few others to include the required data and the dropdown list options. In my mind this is too many parameters. From a performance point of view, there is only one DbContext per request and the DI container will serve the same DbContext to all of the repositories. From a code standards/readability point of view it's ugly. Is there a better way to handle this situation? It's a real-world project with real-world time constraints, so I will not dwell on it too long, but from a learning perspective it would be good to see how such situations are handled by others.
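
    One common way to shrink the constructor, sketched below using the question's own types, is to inject a single provider (a lightweight unit-of-work) that hands out IRepository<T> instances on demand. The IRepositoryProvider name and the GetAll() call are illustrative only, not part of the code above.

        using System.Data.Entity;   // DbContext
        using System.Web.Mvc;       // Controller, ActionResult
        // IRepository<T>, GenericRepository<T>, ILogError, ICurrentUser and the
        // entity types are assumed to be the ones from the question.

        public interface IRepositoryProvider
        {
            IRepository<T> Repository<T>() where T : class;
        }

        public class RepositoryProvider : IRepositoryProvider
        {
            private readonly DbContext _context;

            public RepositoryProvider(DbContext context)
            {
                _context = context;    // still one DbContext per request
            }

            public IRepository<T> Repository<T>() where T : class
            {
                return new GenericRepository<T>(_context);
            }
        }

        public class HomeController : Controller
        {
            private readonly IRepositoryProvider _repositories;
            private readonly ILogError _logError;
            private readonly ICurrentUser _currentUser;

            public HomeController(IRepositoryProvider repositories,
                                  ILogError logError,
                                  ICurrentUser currentUser)
            {
                _repositories = repositories;
                _logError = logError;
                _currentUser = currentUser;
            }

            public ActionResult Dashboard()
            {
                // GetAll() stands in for whichever query methods the real repository exposes.
                var entityOnes = _repositories.Repository<EntityOne>().GetAll();
                // ...gather the rest of the dashboard data the same way.
                return View(entityOnes);
            }
        }

    Another option is a dedicated read-model or query service just for the dashboard, so the controller asks for "dashboard data" once instead of asking for six repositories.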

    Read the article

  • Am I the only one this anal / obsessive about code? [closed]

    - by Chris
    While writing a shared lock class for sql server for a web app tonight, I found myself writing in the code style below as I always do:

        private bool acquired;
        private bool disposed;
        private TimeSpan timeout;
        private string connectionString;
        private Guid instance = Guid.NewGuid();
        private Thread autoRenewThread;

    Basically, whenever I'm declaring a group of variables or writing a sql statement or any coding activity involving multiple related lines, I always try to arrange them where possible so that they form a bell curve (imagine rotating the text 90deg CCW). As an example of something that peeves the hell out of me, consider the following alternative:

        private bool acquired;
        private bool disposed;
        private string connectionString;
        private Thread autoRenewThread;
        private Guid instance = Guid.NewGuid();
        private TimeSpan timeout;

    In the above example, declarations are grouped (arbitrarily) so that the primitive types appear at the top. When viewing the code in Visual Studio, primitive types are a different color than non-primitives, so the grouping makes sense visually, if for no other reason. But I don't like it because the right margin is less of an aesthetic curve. I've always chalked this up to being OCD or something, but at least in my mind, the code is "prettier". Am I the only one?

    Read the article

  • Problem graphics with ATI Radeon x1270 (RS690M) on Ubuntu 12.04

    - by Giuseppe Della Corte
    I'm Italian and so I apologize for my English! I'm a beginner with Ubuntu: I tried it on my desktop PC and it's fantastic, fast and fun! I decided to try it on my netbook, a Packard Bell DOT M/A, and this is the configuration: AMD Athlon L110 1.2GHz, 2GB of RAM, ATI Radeon x1270 (RS690M), 150GB hard disk. I installed Ubuntu 12.04 with Wubi (dual boot Windows 7 + Ubuntu 12.04), because the netbook does not have a DVD player! During the installation everything is OK, but after it was installed I see errors in the graphical display: objects such as buttons, the mouse pointer and the text bar appear in the wrong colors and keep shifting. At one point I couldn't see anything at all, as in this screenshot. The drivers are the open ones that come already installed in Ubuntu. On Windows 7 the video card runs fine: it can run Aero (transparent window effects) well, and I can watch movies in HD and play some games with the Catalyst drivers from AMD. Now I ask a favor: can you help me solve this problem? Is there a fix for this driver, or different drivers somewhere on the Internet? Thanks for your attention, good bye!

    Read the article

  • Google Analytics: How long does it take users to trigger an event

    - by Stephen Ostermiller
    I implemented Google Analytics event tracking on my currency conversion website. The typical user flow is: the user lands on a page about two currencies, enters an amount to be converted, and the site shows the value in the other currency. The JavaScript sends Google Analytics a "converted" event when the currency conversion is done. Because most of the sessions on my site are single-page, the event tracking is very important for knowing whether users find my page useful. I'm looking for a way to figure out how long it typically takes users to enter a value in the form. I expect that this data would form a bell curve centered around some specific amount of time after page load. If I can't get a graph, I could make do with a median value. I would like to be able to use this as a core metric in usability testing. Is there a way to get this information out of Google Analytics?

    Read the article

  • Dual booted Windows 7 freezes after login screen

    - by Cathal
    First-time Linux user, using a Packard Bell Easy Note TS laptop. My problem arose after I dual-boot installed Ubuntu 12.04 alongside Windows 7 via Wubi. I backed up all my data and reinstalled Windows from factory settings on the recovery partition. When I first tried to install Ubuntu, I mistakenly closed the lid at the start of the installation, stopping it. After that I rebooted, and my second installation attempt went without a hitch. Ubuntu works perfectly, and the data on the partitions seems to be fine. My problem is that I can't log back into Windows 7. After selecting it in GRUB, and then in the Windows 7 / Wubi choice on boot, it loads up perfectly until the user login screen. After the password is entered, it stalls on the "Welcome" busy screen. This happens in Safe Mode as well. Startup Repair can't find a problem, and neither can CHKDSK. System Restore and Last Known Good Configuration have no effect either. If anyone could help me out, I'd be really grateful. Edit, in response to the question below, since I don't know how to comment: Windows was installed first and its partitions are the first on the list. Should I move the Windows partitions to after the Linux ones on the disk? Thanks for your help.

    Read the article

  • What is the R Language?

    - by TATWORTH
    I encountered the R Language recently in O'Reilly books, and while from the context I knew it was a language for dealing with statistics, a web search for the support web site was initially futile. However, I have now located the web site: it is at http://www.r-project.org/. R is a free language available for a number of platforms, including Windows, and CRAN mirrors are available at a number of locations worldwide. Here is the official description: "R is a language and environment for statistical computing and graphics. It is a GNU project which is similar to the S language and environment which was developed at Bell Laboratories (formerly AT&T, now Lucent Technologies) by John Chambers and colleagues. R can be considered as a different implementation of S. There are some important differences, but much code written for S runs unaltered under R. R provides a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, ...) and graphical techniques, and is highly extensible. The S language is often the vehicle of choice for research in statistical methodology, and R provides an Open Source route to participation in that activity. One of R's strengths is the ease with which well-designed publication-quality plots can be produced, including mathematical symbols and formulae where needed. Great care has been taken over the defaults for the minor design choices in graphics, but the user retains full control. R is available as Free Software under the terms of the Free Software Foundation's GNU General Public License in source code form. It compiles and runs on a wide variety of UNIX platforms and similar systems (including FreeBSD and Linux), Windows and MacOS."

    Read the article

  • C# array of objects - conditional validation

    - by fishdump
    Sorry about the vague title! I have a class with a number of member variables (system, zone, site, ...):

        public sealed class Cello
        {
            public String Company;
            public String Zone;
            public String System;
            public String Site;
            public String Facility;
            public String Process;
            //...
        }

    I have an array of objects of this class:

        private Cello[] m_cellos = null;
        // ...

    I need to know whether the array contains objects with the same site but different systems, zones or companies, since such a situation would be illegal. I have various other checks to make, but they are all along similar lines. The Array class has a number of functions that look promising, but I am not very up on defining 'key selector' functions and things like that. Any suggestions or pointers would be greatly appreciated. --- Alistair.
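
    A sketch of one way to express the check with LINQ: group the array by Site and flag any group whose members disagree on System, Zone or Company. The HasConflictingSites name is made up for the example; it assumes the Cello class shown above.

        using System.Linq;

        public static class CelloValidation
        {
            // True if any site appears with more than one System, Zone or Company.
            public static bool HasConflictingSites(Cello[] cellos)
            {
                return cellos
                    .Where(c => c != null)
                    .GroupBy(c => c.Site)
                    .Any(g => g.Select(c => c.System).Distinct().Count() > 1
                           || g.Select(c => c.Zone).Distinct().Count() > 1
                           || g.Select(c => c.Company).Distinct().Count() > 1);
            }
        }

    The other checks that are "along similar lines" should fit the same GroupBy/Distinct shape, just with different key selectors.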

    Read the article

  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then of course, programs have continued to get larger, and in 2010, the time had finally come to investigate the issue and find a way to provide for larger archives. As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform.

    Archive Format Details

    Archives are hardly cutting edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format, implemented by System V Release 4 (SVR4) Unix systems such as Solaris:

    -- Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n".

    -- The magic number is followed by 1 or more members. A member starts with a fixed header, defined by the ar_hdr structure in /usr/include/ar.h. Immediately following the header comes the data for the member. Members must be padded at the end with newline characters so that they have even length. The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4 bytes, and 64-bit systems such as we use today would probably have required 8 bytes. 2 byte alignment is a poor choice for ELF object archive members. 32-bit objects require 4 byte alignment, and 64-bit objects require 64-bit alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2 byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8 byte boundaries. Anything else is padded to 2 as required by the format.

    -- The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way of course — nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld).

    -- Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist. There are only two possible special members: a symbol table that maps ELF symbols to the object archive member that provides it, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member which is used in archives that are larger than 4GB.

    -- When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the object to the archive member that provides it. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little endian host such as x86.

    -- The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise.

    The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general, they are not. Typically, the ar command is used to build a new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow.

    Factors That Limit Solaris Archive Size

    As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow.

    -- The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs. As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs, which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which has the ability to scale well past 4GB.

    -- Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. To get around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers.

    -- The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB limit on the size of any individual member in an archive.

    In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system unless the size of the archive exceeds 4GB.

    Archive Format Differences Among Unix Variants

    While considering how to extend Solaris archives to scale to 64-bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've successfully moved archives between different Unix systems before with good luck, so I knew that there was some commonality. If it turned out that there was already a viable de facto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new.

    The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various often incompatible ways, but usually with the same common shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another. In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross platform compatibility of archives between Unix variants has never been an issue. Otherwise, the use of the same magic number in archives with incompatible formats would be a problem.

    I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes: SVR4 Unix, BSD Unix, and IBM AIX. Solaris is a SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX).

    AIX

    AIX is an exception to the rule that Unix archive formats are all based on the original Bell Labs Unix format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems:

    -- These formats use a different magic number than the standard one used by Solaris and other Unix variants.

    -- They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted.

    -- They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to actually traverse the entire file.

    -- Their symbol table members are quite similar to those from other systems though.

    -- Their member headers are doubly linked, containing offsets to both the previous and next members.

    Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance, and is going to be slow to use with large archives.

    BSD

    BSD has gone through 4 versions of archive format, which are described in their manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table.

    GNU/Linux

    The GNU toolchain uses the SVR4 format, and is compatible with Solaris.

    HP-UX

    HP-UX seems to follow the SVR4 model, and is compatible with Solaris.

    IRIX

    IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else.

    Tru64 Unix (Digital/Compaq/HP)

    Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, and not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach in which the hash table pays off for them.

    Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world. When ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures, and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done.

    The Implementation of 64-bit Solaris Archives

    As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing that archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64-bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default. This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member.

    We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that.

    Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf.

    As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, this is based on an estimation that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used. It is not too soon however to have a plan for that eventuality. When the time comes when this limit needs to be lifted, I believe that there is a simple solution that is consistent with the existing format. The archive member header size field is an ASCII string, like the name, and as such, the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same format "/ddd" used for overflowed names.
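
    To make the header layout described above concrete, here is a rough C# sketch that walks an SVR4-style archive and lists its members. The field widths follow the common ar.h layout (16-byte name, 12-byte date, 6+6-byte uid/gid, 8-byte mode, 10-byte size, 2-byte terminator); treat that layout, and the padding handling, as assumptions for illustration rather than a statement of what the Solaris tools do.

        using System;
        using System.IO;
        using System.Text;

        // Walks an archive: 8-byte "!<arch>\n" magic, then 60-byte ASCII member
        // headers, each followed by the member data padded to an even length.
        class ArReader
        {
            const string Magic = "!<arch>\n";

            static void Main(string[] args)
            {
                using (var f = File.OpenRead(args[0]))
                {
                    if (ReadAscii(f, 8) != Magic)
                        throw new InvalidDataException("not an archive");

                    while (f.Position < f.Length)
                    {
                        // Special members start with '/' (symbol table, string
                        // table, or "/123" offsets into the string table).
                        string name = ReadAscii(f, 16).TrimEnd();
                        ReadAscii(f, 12 + 6 + 6 + 8);        // date, uid, gid, mode (unused here)
                        long size = long.Parse(ReadAscii(f, 10).Trim());
                        ReadAscii(f, 2);                     // header terminator

                        Console.WriteLine("{0,-20} {1,12} bytes", name, size);

                        // Skip the data plus the even-length padding, then any extra
                        // newline padding the producer may have added beyond that.
                        f.Seek(size + (size & 1), SeekOrigin.Current);
                        int b;
                        while ((b = f.ReadByte()) == '\n') { }
                        if (b != -1) f.Seek(-1, SeekOrigin.Current);
                    }
                }
            }

            static string ReadAscii(Stream s, int count)
            {
                var buf = new byte[count];
                int got = s.Read(buf, 0, count);
                return Encoding.ASCII.GetString(buf, 0, got);
            }
        }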

    Read the article

  • Consuming WebSphere from WCF client: Unable to create AxisService from ServiceEndpointAddress

    - by JohnIdol
    I am consuming (or trying to consume) a WebSphere service from a WCF client (service reference + bindings generated through svcutil). The connection seems to be established successfully, but I am getting the following error: CWWSS7200E: Unable to create AxisService from ServiceEndpointAddress [address] Does this ring any bell? I am guessing the request format is somehow being rejected by the service; I am sniffing it with Fiddler and it looks fine overall (I can post it if people think it could help). I found this article, but it doesn't seem to apply to my case. Any help appreciated!

    Read the article

  • How do I determine a best-fit distribution in java?

    - by Eadwacer
    I have a bunch of sets of data (between 50 and 500 points, each of which can take a positive integral value) and need to determine which distribution best describes them. I have done this manually for several of them, but need to automate it going forward. Some of the sets are completely modal (every datum has the value 15), some are strongly modal or bimodal, some are bell curves (often skewed and with differing degrees of kurtosis/pointiness), some are roughly flat, and there are any number of other possible distributions (Poisson, power-law, etc.). I need a way to determine which distribution best describes the data and (ideally) also provides me with a fitness metric so that I know how confident I am in the analysis. Existing open-source libraries would be ideal, followed by well-documented algorithms that I can implement myself.
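
    The usual recipe is to fit each candidate distribution by maximum likelihood and then score the fit with a goodness-of-fit statistic (chi-square, Kolmogorov-Smirnov, or an information criterion such as AIC); on the Java side, libraries such as Apache Commons Math provide distribution classes and chi-square test utilities. As a language-agnostic illustration of the shape of the computation, here is a sketch of a chi-square statistic for a Poisson candidate; the class and method names are made up.

        using System;
        using System.Linq;

        static class FitCheck
        {
            // Fit a Poisson by maximum likelihood (lambda = sample mean) and
            // compute a chi-square statistic against the observed counts.
            // Repeat for each candidate and prefer the smallest statistic
            // (or, better, the largest p-value at matching degrees of freedom).
            public static double PoissonChiSquare(int[] data)
            {
                double lambda = data.Average();
                int n = data.Length;
                int maxValue = data.Max();

                double chiSquare = 0.0;
                for (int k = 0; k <= maxValue; k++)
                {
                    int observed = data.Count(x => x == k);
                    double expected = n * PoissonPmf(k, lambda);
                    // In practice, merge bins whose expected counts are tiny.
                    if (expected > 0)
                        chiSquare += Math.Pow(observed - expected, 2) / expected;
                }
                return chiSquare;
            }

            static double PoissonPmf(int k, double lambda)
            {
                // exp(-lambda) * lambda^k / k!, computed in log space to avoid overflow.
                double logP = -lambda + k * Math.Log(lambda) - LogFactorial(k);
                return Math.Exp(logP);
            }

            static double LogFactorial(int k)
            {
                double sum = 0.0;
                for (int i = 2; i <= k; i++) sum += Math.Log(i);
                return sum;
            }
        }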

    Read the article

  • Problem reading from the StandardOutput from ftp.exe. Possible System.Diagnostics.Process Framework b

    - by SoMoS
    Hello, I was trying some stuff executing console applications when I found this problem handling the I/O of the ftp.exe command that everybody has on their computer. Just try this code:

        m_process = New Diagnostics.Process()
        m_process.StartInfo.FileName = "ftp.exe"
        m_process.StartInfo.CreateNoWindow = True
        m_process.StartInfo.RedirectStandardInput = True
        m_process.StartInfo.RedirectStandardOutput = True
        m_process.StartInfo.UseShellExecute = False
        m_process.Start()
        m_process.StandardInput.AutoFlush = True
        m_process.StandardInput.WriteLine("help")
        MsgBox(m_process.StandardOutput.ReadLine())
        MsgBox(m_process.StandardOutput.ReadLine())
        MsgBox(m_process.StandardOutput.ReadLine())
        MsgBox(m_process.StandardOutput.ReadLine())

    This should show you the text that ftp sends you when you do that from the command line, a localized "Commands may be abbreviated. Commands:" banner followed by the command list:

        Los comandos se pueden abreviar. Comandos:
        !         delete       literal      prompt       send
        ?         debug        ls           put          status
        append    dir          mdelete      pwd          trace
        ascii     disconnect   mdir         quit         type
        bell      get          mget         quote        user
        binary    glob         mkdir        recv         verbose
        bye       hash         mls          remotehelp
        cd        help         mput         rename
        close     lcd          open         rmdir

    Instead of that, I'm getting the first line and 3 more with garbage, and after that the call to ReadLine blocks as if there were no data available. Any hints about that?
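
    A common way to avoid blocking synchronous ReadLine() calls is the asynchronous OutputDataReceived/BeginOutputReadLine pattern, sketched below in C# (the original snippet is VB.NET, but the Process API is the same). Note the caveat in the comment: ftp.exe is reportedly awkward with redirected streams, so even this may not capture everything it prints.

        using System;
        using System.Diagnostics;

        class FtpOutputDemo
        {
            static void Main()
            {
                var process = new Process();
                process.StartInfo.FileName = "ftp.exe";
                process.StartInfo.CreateNoWindow = true;
                process.StartInfo.UseShellExecute = false;
                process.StartInfo.RedirectStandardInput = true;
                process.StartInfo.RedirectStandardOutput = true;
                process.StartInfo.RedirectStandardError = true;

                // Each complete output line raises an event instead of blocking a reader.
                process.OutputDataReceived += (s, e) =>
                {
                    if (e.Data != null) Console.WriteLine("OUT: " + e.Data);
                };
                process.ErrorDataReceived += (s, e) =>
                {
                    if (e.Data != null) Console.WriteLine("ERR: " + e.Data);
                };

                process.Start();
                process.BeginOutputReadLine();
                process.BeginErrorReadLine();

                process.StandardInput.WriteLine("help");
                process.StandardInput.WriteLine("bye");   // let ftp.exe exit cleanly

                process.WaitForExit();
            }
        }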

    Read the article

  • iPhone - how to store documents consisting of multiple images?

    - by Joe Strout
    My iPhone (actually, iPad) app creates documents that consist of several images, plus a bit of metadata. What's the best practice for storing these sorts of documents on disk? I see two main options: Create a folder for each document, and store my images as separate PNG files within the folder (plus another little file for the metadata). Create a single file which contains all images and metadata. But I'm not sure how to easily do option 2. I think I can convert my images in PNG format to/from NSData, but then what? I'm still a newbie at Cocoa, but I believe I saw something about stuffing mixed data into some NSSomethingOrOther and having this write itself out to disk, and read itself back in later. Does this ring a bell with anyone? And, will it work with large binary blobs of data like my images? Or would you recommend I simply go with option 1?

    Read the article

  • Delphi 7 compile error - “Duplicate resource(s)” between .res and .dfm

    - by Robo
    I got a very similar error to the one below: http://stackoverflow.com/questions/97800/how-can-i-fix-this-delphi-7-compile-error-duplicate-resources However, the error I got is this:

        [Error] WARNING. Duplicate resource(s):
        [Error]  Type 10 (RCDATA), ID TFMMAINTQUOTE:
        [Error]  File P:\[PATH SNIPPED]\Manufacturing.RES resource kept; file FMaintQuote.DFM resource discarded.

    Manufacturing.res is the default resource file (the application is called Manufacturing.exe), and FMaintQuote is one of the forms. .dfm files are plain text files, so I'm not sure what resource is being duplicated, or how to find and fix it. If I compile the project again, it works OK, but the exe's icon is different from the one I've set in Project Options using the "Load Icon" button. The icon on the app is some sort of bell image that I don't recognize.

    Read the article

  • Smoothing Small Data Set With Second Order Quadratic Curve

    - by Rev316
    I'm doing some specific signal analysis, and I am in need of a method that would smooth out a given bell-shaped distribution curve. A running average approach isn't producing the results I desire. I want to keep the min/max, and general shape of my fitted curve intact, but resolve the inconsistencies in sampling. In short: if given a set of data that models a simple quadratic curve, what statistical smoothing method would you recommend? If possible, please reference an implementation, library, or framework. Thanks SO!
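
    A second-order least-squares fit is simple enough to do directly: build the 3x3 normal equations for y = a + b*x + c*x^2, solve them, and evaluate the fitted polynomial wherever a smoothed value is needed. The sketch below is a generic illustration (class and method names are made up), not tied to any particular library.

        using System;

        static class QuadraticFit
        {
            // Returns the coefficients (a, b, c) of y = a + b*x + c*x^2.
            public static double[] Fit(double[] x, double[] y)
            {
                double s0 = x.Length, s1 = 0, s2 = 0, s3 = 0, s4 = 0;
                double t0 = 0, t1 = 0, t2 = 0;
                for (int i = 0; i < x.Length; i++)
                {
                    double xi = x[i], xi2 = xi * xi;
                    s1 += xi; s2 += xi2; s3 += xi2 * xi; s4 += xi2 * xi2;
                    t0 += y[i]; t1 += y[i] * xi; t2 += y[i] * xi2;
                }

                // Normal equations: power sums of x on the left, x-weighted y sums on the right.
                double[,] m = { { s0, s1, s2 }, { s1, s2, s3 }, { s2, s3, s4 } };
                double[] v = { t0, t1, t2 };
                return Solve3x3(m, v);
            }

            public static double Evaluate(double[] coeff, double x)
            {
                return coeff[0] + coeff[1] * x + coeff[2] * x * x;
            }

            // Plain Gaussian elimination with partial pivoting for a 3x3 system.
            static double[] Solve3x3(double[,] m, double[] v)
            {
                for (int col = 0; col < 3; col++)
                {
                    int pivot = col;
                    for (int r = col + 1; r < 3; r++)
                        if (Math.Abs(m[r, col]) > Math.Abs(m[pivot, col])) pivot = r;
                    for (int c = 0; c < 3; c++)
                    {
                        double tmp = m[col, c]; m[col, c] = m[pivot, c]; m[pivot, c] = tmp;
                    }
                    double tv = v[col]; v[col] = v[pivot]; v[pivot] = tv;

                    for (int r = col + 1; r < 3; r++)
                    {
                        double factor = m[r, col] / m[col, col];
                        for (int c = col; c < 3; c++) m[r, c] -= factor * m[col, c];
                        v[r] -= factor * v[col];
                    }
                }

                var result = new double[3];
                for (int r = 2; r >= 0; r--)
                {
                    double sum = v[r];
                    for (int c = r + 1; c < 3; c++) sum -= m[r, c] * result[c];
                    result[r] = sum / m[r, r];
                }
                return result;
            }
        }

    If a single global quadratic turns out to be too rigid for the tails of the bell shape, the same local fit applied over a sliding window is essentially Savitzky-Golay smoothing, which preserves peak height and width much better than a running average.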

    Read the article

  • How to decomment an html/php webpage?

    - by Sam
    A crazy question: imagine a webpage file called somepage.php, and it contains some HTML and PHP content. In my editor I see:

        <html><head></head><body>
        <?=$welcome . $essay . $thatsAllForNowFolks . $footer ?>
        <!-- Blue Ball Bell Blow Bows Bats Beef Bark Bill Boss -->
        </body></html>

    When I browse my site I see those comments in the final result, while I only want that comment to be in my editor, for my secretive inspirations; I don't want the whole world to know what I'm thinking when I'm developing, and those comments also go out to every one of my website visitors as wasted bandwidth. How do I decomment my entire html/php files at the moment the html is served? Ideas, code and suggestions are much appreciated. My thanks in advance...

    Read the article

  • ORMs and Constructors

    - by Harper Shelby
    I'm looking over .NET ORM implementations, and I have a major burning question - are there any .NET ORM implemenations that don't require public properties for every field in the database? When I see examples like this, a little bell goes off in my head. I firmly believe in encapsulation, and being forced to open the kimono of my objects just to make them work nicely with persistence frameworks gives me the heebie-jeebies. Is this sort of accessibility required in all ORMs out there? If not, please point me to examples of those that don't need it!
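
    For what it's worth, several .NET ORMs can hydrate non-public members; NHibernate, for example, can be configured to write to a backing field or a private setter instead of a public property. Below is a sketch of the kind of encapsulated entity that allows. The Order/OrderLine types are hypothetical, and the mapping configuration (which tells the ORM to use field or private-setter access) is assumed rather than shown.

        using System;
        using System.Collections.Generic;

        public class OrderLine { /* details omitted; hypothetical line item */ }

        public class Order
        {
            private IList<OrderLine> _lines = new List<OrderLine>();

            protected Order() { }                    // for the ORM's use only

            public Order(string customerName)
            {
                CustomerName = customerName;
            }

            public int Id { get; private set; }      // set by the ORM, not by callers
            public string CustomerName { get; private set; }

            public IEnumerable<OrderLine> Lines
            {
                get { return _lines; }               // read-only view for consumers
            }

            public void AddLine(OrderLine line)
            {
                // Invariants live here instead of in whoever sets a public property.
                if (line == null) throw new ArgumentNullException("line");
                _lines.Add(line);
            }
        }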

    Read the article

  • byte-sized bit pattern in C and its relevance?

    - by Nikunj Banka
    I am reading Kernighan and Ritchie's C programming language book, and on page 37 it mentions byte-sized bit patterns like '\013' for vertical tab and '\007' for the bell character. My doubts: what is "byte-sized" here, and what is a bit pattern? What relevance does this hold, and where can I apply it? Is it in any sense related to escape sequences? I can't seem to find any information whatsoever about these byte-sized bit patterns on the web. Please help. Thanks.

    Read the article

  • Code snippet manager suggestions

    - by dave
    I'm looking for a code snippet manager per the following:

    -- Usable on Windows
    -- Stand-alone product
    -- Desktop-based (not online)
    -- Free or paid
    -- Has PHP syntax highlighting

    I've found the following, but they don't seem to quite ring the bell (although they are good products):

    -- Snip-It Pro (not free) -- Has syntax highlighting, but seems "not there yet."
    -- The Guide (free: SourceForge) -- Tree-based info manager, no syntax highlighting.
    -- ActionOutline (free, upgrade not free) -- Tree-based info manager, no syntax highlighting.

    There have been questions about this before on stackoverflow, but the last one was over a year ago (over 400 answers), which is where I got the products listed above. Just wondering if I've overlooked anything produced more recently. Thanks for any help.

    Read the article

  • Why is '\x' invalid in Python?

    - by Paul McGuire
    I was experimenting with '\' characters, using '\a\b\c...' just to enumerate for myself which characters Python interprets as control characters, and to what. Here's what I found:

        \a - BELL
        \b - BACKSPACE
        \f - FORMFEED
        \n - LINEFEED
        \r - RETURN
        \t - TAB
        \v - VERTICAL TAB

    Most of the other characters I tried, '\g', '\s', etc. just evaluate to the 2-character string of a backslash and the given character. I understand this is intentional, and makes sense to me. But '\x' is a problem. When my script reaches this source line:

        val = "\x"

    I get:

        ValueError: invalid \x escape

    What is so special about '\x'? Why is it treated differently from the other non-escaped characters?

    Read the article

  • Generating pagination links

    - by alpheus
    I am trying to implement a paging system that displays nearby page numbers as well as pages at each extreme. For example, if the user is on page 20 of 40, the following links should be displayed: 1, 2 ... 18, 19, [20], 21, 22 ... 39, 40. The solution would be similar to the one described here: http://90poe.com/alex-lee-on-bell-curve-pagination I have seen code to do this in PHP, but not in ASP.net (ideally I am looking for C# code). If anyone has done anything like this previously, it would be very helpful to see your code.
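
    A sketch of the page-number generation in C#; the parameter names and the use of 0 as an ellipsis marker are arbitrary choices for the example.

        using System.Collections.Generic;

        static class Pagination
        {
            // Yields the page numbers to render, with 0 standing in for "...".
            // edgeCount = pages kept at each end; windowCount = pages kept on
            // each side of the current page.
            public static IEnumerable<int> PageLinks(int currentPage, int totalPages,
                                                     int edgeCount = 2, int windowCount = 2)
            {
                var wanted = new SortedSet<int>();

                for (int p = 1; p <= edgeCount; p++) wanted.Add(p);
                for (int p = totalPages - edgeCount + 1; p <= totalPages; p++) wanted.Add(p);
                for (int p = currentPage - windowCount; p <= currentPage + windowCount; p++)
                    wanted.Add(p);

                wanted.RemoveWhere(p => p < 1 || p > totalPages);

                int previous = 0;
                foreach (int page in wanted)
                {
                    if (previous != 0 && page - previous > 1)
                        yield return 0;          // gap: render an ellipsis
                    yield return page;
                    previous = page;
                }
            }
        }

    For currentPage = 20 and totalPages = 40 this yields 1, 2, gap, 18, 19, 20, 21, 22, gap, 39, 40; the view then renders each gap as "..." and the current page as plain text rather than a link.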

    Read the article

  • C# Console Application - Odd behaviour - char '\a'

    - by KHT
    After extensive debugging of an application, I noticed the console window would hang when searching text for the char '\a'. The goal is to strip out characters from a file. The console window would always hang upon exiting the program, even though execution made it to the last statement of Main. I removed the '\a' from the switch statement and the console application does not hang anymore. Any idea why? I still need to strip out the char '\a', but cannot get the application to work without hanging.

        switch (c)
        {
            case '\t':   // Horizontal Tab
            case '\v':   // Vertical Tab
            case '\n':   // Newline
            case '\f':   // Form feed
            case '\r':   // Carriage return
            case '\b':   // Backspace
            case '\x7f': // Delete character
            case '\x99': // TM Trademark
            case '\a':   // Bell Alert  **REMOVED THIS**
                return true;
        }

    Read the article

  • Slides and Code from my Silverlight MVVM Talk at DevConnections

    - by dwahlin
    I had a great time at the DevConnections conference in Las Vegas this year where Visual Studio 2010 and Silverlight 4 were launched. While at the conference I had the opportunity to give a full-day Silverlight workshop as well as 4 different talks and met a lot of people developing applications in Silverlight. I also had a chance to appear on a live broadcast of Channel 9 with John Papa, Ward Bell and Shawn Wildermuth, record a video with Rick Strahl covering jQuery versus Silverlight and record a few podcasts on Silverlight and ASP.NET MVC 2. It was a really busy 4 days but I had a lot of fun chatting with people and hearing about different business problems they were solving with ASP.NET and/or Silverlight. Thanks to everyone who attended my sessions and took the time to ask questions and stop by to talk one-on-one. One of the talks I gave covered the Model-View-ViewModel pattern and how it can be used to build architecturally sound applications. Topics covered in the talk included:

    -- Understanding the MVVM pattern
    -- Benefits of the MVVM pattern
    -- Creating a ViewModel class
    -- Implementing INotifyPropertyChanged in a ViewModelBase class
    -- Binding a ViewModel declaratively in XAML
    -- Binding a ViewModel with code
    -- ICommand and ButtonBase commanding support in Silverlight 4
    -- Using InvokeCommandBehavior to handle additional commanding needs
    -- Working with ViewModels and Sample Data in Blend
    -- Messaging support with EventBus classes, EventAggregator and Messenger
    -- My personal take on code in a code-beside file (I'm all in favor of it when used appropriately for message boxes, child windows, animations, etc.)

    One of the samples I showed in the talk was intended to teach all of the concepts mentioned above while keeping things as simple as possible. The sample demonstrates quite a few things you can do with Silverlight and the MVVM pattern so check it out and feel free to leave feedback about things you like, things you'd do differently or anything else. MVVM is simply a pattern, not a way of life so there are many different ways to implement it. If you're new to the subject of MVVM check out the following resources. I wish this talk would've been recorded (especially since my live and canned demos all worked :-)) but these resources will help get you going quickly.

    -- Getting Started with the MVVM Pattern in Silverlight Applications
    -- Model-View-ViewModel (MVVM) Explained
    -- Laurent Bugnion's Excellent Talk at MIX10

    Download sample code and slides from my DevConnections talk

    For more information about onsite, online and video training, mentoring and consulting solutions for .NET, SharePoint or Silverlight please visit http://www.thewahlingroup.com.
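
    For anyone who wants the shape of the first two topics above in code, here is a minimal, generic sketch of a ViewModelBase that raises INotifyPropertyChanged and a ViewModel built on it; it is not the sample from the talk, and the CustomerViewModel name is made up for illustration.

        using System.ComponentModel;

        // Raises PropertyChanged so XAML bindings refresh when state changes.
        public abstract class ViewModelBase : INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler PropertyChanged;

            protected void OnPropertyChanged(string propertyName)
            {
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }

        public class CustomerViewModel : ViewModelBase
        {
            private string _name;

            public string Name
            {
                get { return _name; }
                set
                {
                    if (_name == value) return;
                    _name = value;
                    OnPropertyChanged("Name");   // the View's bindings update here
                }
            }
        }

    In Silverlight 4, a ButtonBase.Command binding to an ICommand exposed from the same ViewModel covers the commanding topics in the list above.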

    Read the article
