Search Results

Search found 21352 results on 855 pages for 'bit shift'.


  • How do I run NUnit in debug mode from Visual Studio?

    - by Jon Cage
    I've recently been building a test framework for a bit of C# I've been working on. I have NUnit set up and a new project within my workspace to test the component. All works well if I load up my unit tests from NUnit (v2.4), but I've got to the point where it would be really useful to run in debug mode and set some breakpoints. I've tried the suggestions from several guides, which all suggest changing the 'Debug' properties of the test project:

        Start external program: C:\Program Files\NUnit 2.4.8\bin\nunit-console.exe
        Command line arguments: /assembly: <full-path-to-solution>\TestDSP\bin\Debug\TestDSP.dll

    I'm using the console version there, but have tried calling the GUI as well. Both give me the same error when I try to start debugging:

        Cannot start test project 'TestDSP' because the project does not contain any tests.

    Is this because I normally load \DSP.nunit into the NUnit GUI, and that's where the tests are held? I'm beginning to think the problem may be that VS wants to run its own test framework, and that's why it's failing to find the NUnit tests.

    [Edit] To those asking about test fixtures, one of my .cs files in the TestDSP project looks roughly like this:

        namespace Some.TestNamespace
        {
            // Testing framework includes
            using NUnit.Framework;

            [TestFixture]
            public class FirFilterTest
            {
                /// <summary>
                /// Tests that a FirFilter can be created
                /// </summary>
                [Test]
                public void Test01_ConstructorTest()
                {
                    ...some tests...
                }
            }
        }

    ...I'm pretty new to C# and the NUnit test framework, so it's entirely possible I've missed some crucial bit of information ;-)

    [FINAL SOLUTION] The big problem was the project type I'd used. If you pick Other Languages->Visual C#->Test->Test Project when you're choosing the project type, Visual Studio will try to use its own testing framework as far as I can tell. You should pick a normal C# class library project instead, and then the instructions in my selected answer will work.

    Read the article

  • DNS resolution problems; dig SERVFAIL error

    - by JustinP
    I'm setting up a couple of dedicated servers and having problems setting up my nameservers properly. One of them is a LEMP server (LAMP with nginx in place of Apache), and the other will function solely as an email server, running exim/dovecot/ASSP antispam (no Apache). The LEMP server is CentOS 5.5 with no control panel, while the email server is CentOS 5.5 as well, with cPanel/WHM.

    So, I've had problems getting DNS set up properly. I have two domains, each one pointing to one of these servers. The nameservers are registered correctly with the domain registrar, and the nameserver IPs are entered correctly as well. I've spoken to tech support at the registrar and they confirm that everything is set up on their end. Not knowing much about DNS, I googled nameservers and DNS until I nearly went blind, and spent hours messing with the configuration. Eventually, I got the LEMP server's DNS working properly (no cPanel). Pleased with this triumph, I'm trying to mimic that configuration and repeat the process with the email server, and it's just not happening. The nameserver starts and stops, but the domain doesn't resolve.

    Things I have tried:

    - Going through standard procedures to set up DNS in WHM
    - Clearing all DNS information, uninstalling BIND, then reinstalling all of that and again going through WHM procedures for setting up DNS
    - Clearing all DNS information, and setting up BIND via shell (completely outside of cPanel) by using my config and zone files from the LEMP server as a template

    named runs just fine, but nothing is resolving. When I "dig any example.com" I get a SERVFAIL message. Nslookups return no information. Here are my config and zone files.

    named.conf:

        controls {
            inet 127.0.0.1 allow { localhost; } keys { coretext-key; };
        };

        options {
            listen-on port 53 { any; };
            listen-on-v6 port 53 { ::1; };
            directory "/var/named";
            dump-file "/var/named/data/cache_dump.db";
            statistics-file "/var/named/data/named_stats.txt";
            memstatistics-file "/var/named/data/named_mem_stats.txt";
            // Those options should be used carefully because they disable port randomization
            // query-source port 53;
            // query-source-v6 port 53;
            allow-query { any; };
            allow-query-cache { any; };
        };

        logging {
            channel default_debug {
                file "data/named.run";
                severity dynamic;
            };
        };

        view "localhost_resolver" {
            match-clients { 127.0.0.0/24; };
            match-destinations { localhost; };
            recursion yes;
            //zone "." IN {
            //    type hint;
            //    file "/var/named/named.ca";
            //};
            include "/etc/named.rfc1912.zones";
        };

        view "internal" {
            /* This view will contain zones you want to serve only to "internal" clients
               that connect via your directly attached LAN interfaces - "localnets". */
            match-clients { localnets; };
            match-destinations { localnets; };
            recursion yes;
            zone "." IN {
                type hint;
                file "/var/named/named.ca";
            };
            // include "/var/named/named.rfc1912.zones";
            // you should not serve your rfc1912 names to non-localhost clients.
            // These are your "authoritative" internal zones, and would probably
            // also be included in the "localhost_resolver" view above:
            zone "example.com" {
                type master;
                file "data/db.example.com";
            };
            zone "3.2.1.in-addr.arpa" {
                type master;
                file "data/db.1.2.3";
            };
        };

        view "external" {
            /* This view will contain zones you want to serve only to "external" clients
               that have addresses that are not on your directly attached LAN interface subnets: */
            match-clients { any; };
            match-destinations { any; };
            recursion no;
            // you'd probably want to deny recursion to external clients, so you don't
            // end up providing free DNS service to all takers
            allow-query-cache { none; };
            // Disable lookups for any cached data and root hints
            // all views must contain the root hints zone:
            //include "/etc/named.rfc1912.zones";
            zone "." IN {
                type hint;
                file "/var/named/named.ca";
            };
            zone "example.com" {
                type master;
                file "data/db.example.com";
            };
            zone "3.2.1.in-addr.arpa" {
                type master;
                file "data/db.1.2.3";
            };
        };

        include "/etc/rndc.key";

    db.example.com:

        $TTL 1D
        ;
        ; Zone file for example.com
        ;
        ; Mandatory minimum for a working domain
        ;
        @       IN SOA  ns1.example.com. contact.example.com. (
                        2011042905 ; serial
                        8H         ; refresh
                        2H         ; retry
                        4W         ; expire
                        1D )       ; default_ttl
                NS      ns1.example.com.
                NS      ns2.example.com.
        ns1             A       1.2.3.4
        ns2             A       1.2.3.5
        example.com.    A       1.2.3.4
        localhost       A       127.0.0.1
        www             CNAME   example.com.
        mail            CNAME   example.com.
        ;

    db.1.2.3:

        $TTL 1D
        $ORIGIN 3.2.1.in-addr.arpa.
        @       IN SOA  ns1.example.com contact.example.com. (
                        2011042908 ;
                        8H ;
                        2H ;
                        4W ;
                        1D )
                NS      ns1.example.com.
                NS      ns2.example.com.
        4       PTR     hostname.example.com.
        5       PTR     hostname.example.com.
        ;

    Also of note: both of these servers are managed. Tech support is very responsive, and largely useless. Hours go by with them asking me questions to narrow down what could be wrong, then they pass the ticket to the tech on the next shift, who ignores everything that's happened already and spends his whole shift asking all the same questions the last guy asked.

    So, in summary:

    * Nameservers, with IPs, are correctly registered with the domain registrar
    * named is configured and running
    * ...and must not be configured correctly, because nothing resolves

    Any help would be great. I changed domains and IPs in the files to generics, but let me know if you need to know the domain in question. Thanks!

    UPDATE: I found that I didn't have 127.0.0.1 in /etc/resolv.conf, so I added it, along with my two public IPs that I have named listening on.

    resolv.conf:

        search www.example.com example.com
        nameserver 127.0.0.1
        nameserver 7.8.9.10   ; was in here by default, authoritative nameserver of hosting company
        nameserver 1.2.3.4    ; public IP #1
        nameserver 1.2.3.5    ; public IP #2

    Now when I dig example.com from the host, it resolves. If I try to dig from my other server (in the same datacenter), or from the internet, it times out or I get SERVFAIL.
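
    For reference, a couple of quick checks from the shell that often narrow this kind of problem down (a sketch added here, not from the original post; the file paths are assumptions based on the config above):

        # verify the config parses and the zones actually load
        named-checkconf /etc/named.conf
        named-checkzone example.com /var/named/data/db.example.com
        named-checkzone 3.2.1.in-addr.arpa /var/named/data/db.1.2.3

        # query the server directly, bypassing /etc/resolv.conf, first from the host
        # itself and then from an outside machine; if the first works and the second
        # doesn't, the view/match-clients setup or the firewall is the usual suspect
        dig @1.2.3.4 example.com SOA +norecurse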

    Read the article

  • Basic question on retain/release semantics from Apple's reference library

    - by davetron5000
    I have done Objective-C way back when, and have recently (i.e. just now) read the documentation on Apple's site regarding the use of retain and release. However, there is a bit of code in their Creating an iPhone Application page that has me a bit confused:

        - (void)setUpPlacardView {
            // Create the placard view -- it calculates its own frame based on its image.
            PlacardView *aPlacardView = [[PlacardView alloc] init];
            self.placardView = aPlacardView;
            [aPlacardView release]; // What effect does this have on self.placardView?!
            placardView.center = self.center;
            [self addSubview:placardView];
        }

    Not seeing the entire class, it seems that self.placardView is also a PlacardView *, and the assignment of aPlacardView to it doesn't seem to indicate it will retain a reference to it. So it appears to me that the line I've commented ([aPlacardView release];) could result in aPlacardView having a retain count of 0 and thus being deallocated. Since self.placardView points to it, wouldn't that now point at deallocated memory and cause a problem?
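
    For context, a sketch of why this works in Apple's samples (an assumption based on their conventions, since the property declaration isn't shown in the excerpt): placardView would be declared as a retain property, so assigning through self.placardView goes through a synthesized setter that retains the new value.

        // Assumed declaration (not shown in the excerpt above):
        // @property (nonatomic, retain) PlacardView *placardView;

        // A retain property's synthesized setter behaves roughly like this:
        - (void)setPlacardView:(PlacardView *)newView {
            if (placardView != newView) {
                [placardView release];          // give up ownership of the old view
                placardView = [newView retain]; // take ownership of the new one
            }
        }

        // So after "self.placardView = aPlacardView;" the view has a retain count of 2,
        // and the [aPlacardView release] that follows drops it back to 1, owned by self.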

    Read the article

  • Java M4A atom tagging free space issue

    - by Brett
    Hey, I've been trying to read and write iTunes-style M4A atoms, and while I've successfully done the reading part, I've come to a bit of a halt in regards to the free space atoms. I figured that I should be able to edit and shift the padding around to accommodate writing an atom with more data than it originally had. I've been stuck on this for about a day now, trying to figure out how to determine the closest free space atom with enough size to accommodate the new data. So far I have:

        private freeAtom acquireFreeSpaceAtom( long position ) {
            long atomStart = Long.MAX_VALUE;
            freeAtom atom = null;
            for( freeAtom a : freeSpace ) {
                if( Math.abs( position - atomStart ) > Math.abs( position - a.getAtomStart() ) )
                    atomStart = ( atom = a ).getAtomStart();
            }
            return atom;
        }

    That code only takes into account the closest free space atom and completely disregards the fact that it should be greater than or equal to a certain size, and I can't quite figure out how to check for both closeness and size efficiently.
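
    A minimal sketch of one way to fold the size requirement in (it assumes freeAtom exposes a size accessor, here called getSize(), which isn't shown in the question): skip atoms that are too small, then keep the nearest of the remaining ones.

        // Hypothetical variant: only atoms large enough for the new data are considered.
        private freeAtom acquireFreeSpaceAtom( long position, long requiredSize ) {
            freeAtom best = null;
            long bestDistance = Long.MAX_VALUE;
            for ( freeAtom a : freeSpace ) {
                if ( a.getSize() < requiredSize )
                    continue; // too small to hold the new data
                long distance = Math.abs( position - a.getAtomStart() );
                if ( distance < bestDistance ) {
                    bestDistance = distance;
                    best = a;
                }
            }
            return best; // null if no free space atom is big enough
        }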

    Read the article

  • Implicit Type Conversions in Reflection

    - by bradhe
    So I've written a quick bit of code to help quickly convert between business objects and view models. Not to pimp my own blog, but you can find the details here if you're interested or need to know. One issue I've run into is that I have a custom collection type, ProductCollection, and I need to turn that into a string[] on my model. Obviously, since there is no default implicit cast, I'm getting an exception in my contract converter. So I thought I would write the next little bit of code, and that should solve the problem:

        public static implicit operator string[](ProductCollection collection) {
            var list = new List<string>();
            foreach (var product in collection) {
                if (product.Id == null) {
                    list.Add(null);
                } else {
                    list.Add(product.Id.ToString());
                }
            }
            return list.ToArray();
        }

    However, it still fails with the same cast exception. I'm wondering if it has something to do with being in reflection? If so, is there anything that I can do here? I'm open to architectural solutions, too!
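
    For what it's worth, user-defined implicit operators are resolved by the compiler at compile time; they compile down to a static method named op_Implicit, and reflection-based assignment won't apply them automatically. A hedged sketch of invoking the operator by hand from converter code (names and the fallback behaviour are illustrative):

        // Requires: using System; using System.Reflection;
        static object ConvertViaImplicitOperator(object value, Type targetType)
        {
            foreach (var method in value.GetType().GetMethods(BindingFlags.Public | BindingFlags.Static))
            {
                if (method.Name == "op_Implicit" && targetType.IsAssignableFrom(method.ReturnType))
                    return method.Invoke(null, new object[] { value });
            }
            return value; // no matching conversion operator found; caller decides what to do
        }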

    Read the article

  • How to easily map C++ enums to strings

    - by Roddy
    I have a bunch of enum types in some library header files that I'm using, and I want to have a way of converting enum values to user strings - and vice versa. RTTI won't do it for me, because the 'user strings' need to be a bit more readable than the enumerations. A brute force solution would be a bunch of functions like this, but I feel that's a bit too C-like:

        enum MyEnum {VAL1, VAL2, VAL3};

        String getStringFromEnum(MyEnum e)
        {
            switch (e) {
                case VAL1: return "Value 1";
                case VAL2: return "Value 2";
                case VAL3: return "Value 3";
                default: throw Exception("Bad MyEnum");
            }
        }

    I have a gut feeling that there's an elegant solution using templates, but I can't quite get my head round it yet.

    UPDATE: Thanks for the suggestions - I should have made clear that the enums are defined in a third-party library header, so I don't want to have to change the definition of them. My gut feeling now is to avoid templates and do something like this:

        char * MyGetValue(int v, char *tmp); // implementation is trivial

        #define ENUM_MAP(type, strings) char * getStringValue(const type &T) \
        { \
            return MyGetValue((int)T, strings); \
        }

        enum eee {AA,BB,CC};   // exists in library header file
        enum fff {DD,GG,HH};

        ENUM_MAP(eee,"AA|BB|CC")
        ENUM_MAP(fff,"DD|GG|HH")

        // To use...
        eee e; fff f;
        std::cout << getStringValue(e);
        std::cout << getStringValue(f);
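
    One hedged sketch of a non-macro alternative (not from the original question): keep a small lookup table per enum type and a shared helper, which also leaves the third-party headers untouched.

        #include <map>
        #include <stdexcept>
        #include <string>

        // Generic helper: look the value up in a per-enum table. Names are illustrative.
        template <typename E>
        std::string getStringFromEnum(E value, const std::map<E, std::string>& names)
        {
            typename std::map<E, std::string>::const_iterator it = names.find(value);
            if (it == names.end())
                throw std::runtime_error("Bad enum value");
            return it->second;
        }

        // Usage sketch:
        //   std::map<MyEnum, std::string> myEnumNames;
        //   myEnumNames[VAL1] = "Value 1";
        //   myEnumNames[VAL2] = "Value 2";
        //   myEnumNames[VAL3] = "Value 3";
        //   std::cout << getStringFromEnum(e, myEnumNames);

    Going the other way (string to enum) can reuse the same table by searching its values, or keep a second, reversed map.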

    Read the article

  • How to create Web services and link them to a Smart Device Application?

    - by Crimsonland
    I am a fresh graduate, hired to develop a Smart Device Application. We use Data logic Memoir handhelds with Windows CE 5.0. Even though I have novice skills in programming in VB.NET, I just finished my project and applications for the Data logic Memoir, wherein the data is saved to a text file or a SQL Compact database on the handheld device, and an ActiveSync connection is used to pass data from the handheld to the desktop PC and vice versa. I have now shifted to C#.NET, started learning it, and successfully ported my Smart Device project from VB to C#. But now I want to start a project linking the device to a server using web services, and I don't have any idea how to begin.

    Read the article

  • Generic version control strategy for select table data within a heavily normalized database

    - by leppie
    Hi. Sorry for the long-winded title, but the requirement/problem is rather specific. With reference to the following sample (but very simplified) structure, in pseudo SQL, I hope to explain it a bit better:

        TABLE StructureName {
            Id GUID PK,
            Name varchar(50) NOT NULL
        }

        TABLE Structure {
            Id GUID PK,
            ParentId GUID (FK to Structure),
            NameId GUID (FK to StructureName) NOT NULL
        }

        TABLE Something {
            Id GUID PK,
            RootStructureId GUID (FK to Structure) NOT NULL
        }

    As one can see, Structure is a simple tree structure (not worried about ordering of children for this problem). StructureName is a simplification of a translation system. Finally, 'Something' is simply something referencing the tree's root structure. This is just one of many tables that need to be versioned, but this one serves as a good example for most cases.

    There is a requirement to version any changes to the names and/or the tree 'layout' of the Structure table. Previous versions should always be available. There seem to be a few possibilities for tackling this issue, like copying the entire structure, but most approaches cause one to 'lose' referential integrity. For example, if one followed this approach, one would have to make a duplicate of the 'Something' record, given that the root structure will be a new record and have a new ID. Other avenues of possible solutions are looking into how wikis handle this, or going a lot further and looking at how proper version control systems work.

    Currently, I feel a bit clueless about how to proceed on this in a generic way. Any ideas will be greatly appreciated. Thanks, leppie

    Read the article

  • Python: Create nested dictionary from list of paths

    - by sberry2A
    I have a list of tuples that looks similar to this (simplified here; there are over 14,000 of these tuples, with more complicated paths than Obj.part):

        [ (Obj1.part1, {<SPEC>}),
          (Obj1.partN, {<SPEC>}),
          (ObjK.partN, {<SPEC>}) ]

    where Obj goes from 1 - 1000 and part from 0 - 2000. These "keys" all have a dictionary of specs associated with them which act as a lookup reference for inspecting another binary file. The specs dict contains information such as the bit offset, bit size, and C type of the data pointed to by the path ObjK.partN. For example, Obj4.part500 might have this spec, {'size':32, 'offset':128, 'type':'int'}, which would let me know that to access Obj4.part500 in the binary file I must unpack 32 bits from offset 128.

    So now I want to take my list of strings and create a nested dictionary, which in the simplified case will look like this:

        data = { 'Obj1' : {'part1':{spec}, 'partN':{spec} },
                 'ObjK' : {'part1':{spec}, 'partN':{spec} } }

    To do this I am currently doing two things. First, I am using a dotdict class to be able to use dot notation for dictionary get / set. That class looks like this:

        class dotdict(dict):
            def __getattr__(self, attr):
                return self.get(attr, None)
            __setattr__ = dict.__setitem__
            __delattr__ = dict.__delitem__

    The method for creating the nested "dotdict"s looks like this:

        def addPath(self, spec, parts, base):
            if len(parts) > 1:
                item = base.setdefault(parts[0], dotdict())
                self.addPath(spec, parts[1:], item)
            else:
                item = base.setdefault(parts[0], spec)
            return base

    Then I just do something like:

        for path, spec in paths:
            self.lookup = dotdict()
            self.addPath(spec, path.split("."), self.lookup)

    So, in the end self.lookup.Obj4.part500 points to the spec. Is there a better (more pythonic) way to do this?
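
    A hedged sketch of a more direct alternative (plain dicts shown; substituting dotdict works the same way): walk each dotted path iteratively with setdefault instead of recursing.

        # Sketch: `paths` is assumed to be the same list of (path, spec) tuples as above.
        def build_lookup(paths):
            lookup = {}
            for path, spec in paths:
                parts = path.split(".")
                node = lookup
                for part in parts[:-1]:
                    node = node.setdefault(part, {})   # descend, creating levels as needed
                node[parts[-1]] = spec                 # attach the spec at the leaf
            return lookup

        # Usage sketch:
        #   lookup = build_lookup(paths)
        #   lookup["Obj4"]["part500"]   # -> {'size':32, 'offset':128, 'type':'int'}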

    Read the article

  • Toshiba Laptop Problem - Black Screen on Startup [using Win Vista]

    - by BubblySue
    Hi guys, can anyone help me with my problem? I only see a black screen after startup. It shows the logo and the status bar on start, then it goes to a black screen with a moveable cursor. I tried Ctrl+Alt+Del, but it doesn't work. I pressed Shift 5 times and it makes a sound. I already removed the battery and restarted, but it's still the same. I can get into Safe Mode and have scanned from there, but the desktop still won't show up. I don't know what else to check, and I've been searching the net for solutions. Please help? :(

    Read the article

  • Integer array or struct array - which is better?

    - by MusiGenesis
    In my app, I'm storing Bitmap data in a two-dimensional integer array (int[,]). To access the R, G and B values I use something like this:

        // read:
        int i = _data[x, y];
        byte B = (byte)(i >> 0);
        byte G = (byte)(i >> 8);
        byte R = (byte)(i >> 16);

        // write:
        _data[x, y] = BitConverter.ToInt32(new byte[] { B, G, R, 0 }, 0);

    I'm using integer arrays instead of an actual System.Drawing.Bitmap because my app runs on Windows Mobile devices, where the memory available for creating bitmaps is severely limited. I'm wondering, though, if it would make more sense to declare a structure like this:

        public struct RGB
        {
            public byte R;
            public byte G;
            public byte B;
        }

    ...and then use an array of RGB instead of an array of int. This way I could easily read and write the separate R, G and B values without having to do bit-shifting and BitConverter-ing. I vaguely remember something from days of yore about byte variables being block-aligned on 32-bit systems, so that a byte actually takes up 4 bytes of memory instead of just 1 (but maybe this was just a Visual Basic thing). Would using an array of structs (like the RGB example above) be faster than using an array of ints, and would it use 3/4 the memory or 3 times the memory of ints?
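
    As an aside (not from the original question), the write can also be done with plain shift-and-or arithmetic, which avoids allocating a temporary byte array per pixel and keeps packing and unpacking symmetric. A hedged sketch:

        // write without BitConverter:
        _data[x, y] = (R << 16) | (G << 8) | B;

        // read, masking each channel explicitly:
        int i = _data[x, y];
        byte b = (byte)(i & 0xFF);
        byte g = (byte)((i >> 8) & 0xFF);
        byte r = (byte)((i >> 16) & 0xFF);

    On the struct question: an array of a struct with three byte fields stores 3 bytes per element, so the RGB route should use roughly 3/4 of the memory of the int array rather than 3 times as much.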

    Read the article

  • Indices instead of pointers in STL containers?

    - by zvrba
    Due to specific requirements [*], I need a singly-linked list implementation that uses integer indices instead of pointers to link nodes. The indices are always interpreted with respect to a vector containing the list nodes. I thought I might achieve this by defining my own allocator, but looking into gcc's implementation of std::list, they explicitly use pointers for the link fields in the list nodes (i.e., they do not use the pointer type provided by the allocator):

        struct _List_node_base
        {
            _List_node_base* _M_next;   ///< Self-explanatory
            _List_node_base* _M_prev;   ///< Self-explanatory
            ...
        };

    (For this purpose, the allocator interface is also deficient in that it does not define a dereference function; "dereferencing" an integer index always needs a pointer to the underlying storage.)

    Do you know of a library of STL-like data structures (I am mostly in need of singly- and doubly-linked lists) that uses indices (with respect to a base vector) instead of pointers to link nodes?

    [*] Saving space: the lists will contain many 32-bit integers. With two pointers per node (STL list is doubly-linked), the overhead is 200%, or 400% on a 64-bit platform, not counting the overhead of the default allocator.
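
    A hedged sketch of the underlying idea (not a library recommendation): keep the nodes in a std::vector and link them with 32-bit indices, with a sentinel index standing in for a null pointer.

        #include <cstdint>
        #include <vector>

        // Sketch of an index-linked singly-linked list. Indices refer to slots in
        // `nodes`; NIL plays the role of a null pointer.
        struct IndexList {
            static const std::uint32_t NIL = 0xFFFFFFFFu;

            struct Node {
                std::int32_t  value;
                std::uint32_t next;
            };

            std::vector<Node> nodes;
            std::uint32_t head;

            IndexList() : head(NIL) {}

            std::uint32_t push_front(std::int32_t v) {
                Node n;
                n.value = v;
                n.next  = head;
                nodes.push_back(n);
                head = static_cast<std::uint32_t>(nodes.size() - 1);
                return head;
            }
        };

        // Per-element overhead is one 32-bit index (plus the vector's amortized slack)
        // instead of one or two pointers.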

    Read the article

  • What Windows message is fired for mouse-up with modifier keys?

    - by Greg
    My WndProc isn't seeing mouse-up notifications when I click with a modifier key (Shift or Control) pressed. I see them without the modifier key, and I see mouse-down notifications with the modifier keys. I'm using the Windows Forms NativeWindow wrapper to get Windows messages from the WndProc() method. I've tried tracking the notifications I do get, and the only clue I see is WM_CAPTURECHANGED. I've tried calling SetCapture when I receive the WM_LBUTTONDOWN message, but it doesn't help.

    Without modifier (skipping paint, timer and NCHITTEST messages):

        WM_LBUTTONDOWN
        WM_SETCURSOR
        WM_MOUSEMOVE
        WM_SETCURSOR
        WM_LBUTTONUP

    With modifier (skipping paint, timer and NCHITTEST messages):

        WM_KEYDOWN
        WM_PARENTNOTIFY
        WM_MOUSEACTIVATE
        WM_MOUSEACTIVATE
        WM_SETCURSOR
        WM_LBUTTONDOWN
        WM_SETCURSOR (repeats)
        WM_KEYDOWN (repeats)
        WM_KEYUP

    If I hold the mouse button down for a long time, I can usually get a WM_LBUTTONUP notification, but it should be possible to make it more responsive. What am I missing? Thanks.

    Read the article

  • Windows 7 versus Windows XP multithreading - Delphi app not acting right

    - by Robert Oschler
    I'm having a problem with a Delphi Pro 6 application that I wrote on my Windows XP machine when it runs on Windows 7. I don't have Windows 7 to test yet and I'm trying to see if Windows 7 might be the source of the trouble. Is there a fundamental difference between the way Windows 7 handles threads compared to Windows XP? I am seeing things happen out of sequence in my error logs on Windows 7 and it's causing problems. For example, objects that should have been initialized are uninitialized when running on Windows 7, yet those objects are initialized on Windows XP by the time they are needed.

    Some questions:

    1) Are there any core differences that could cause threads/processes to behave differently between the two operating system versions?
    2) I know this next question may seem absurd, but does Windows 7 attempt to split/fork threads that aren't split/forked on Windows XP?
    3) And lastly, are there any known issues with FPU handling that can cause XP programs trouble when run on Windows 7, due to operational differences in wait state handling or register storage, or perhaps something like Exception mask settings, etc.?
    4) Any 32-bit versus 64-bit issues that could be creating trouble here?

    -- roschler

    Read the article

  • How to modify the keyboard interrupt (under Windows XP) from a C++ program?

    - by rockr90
    Hi everyone! We have been given a little project (as part of my OS course) to make a Windows program that modifies keyboard input so that it transforms any lowercase character entered into an uppercase one (without using Caps Lock) - so when you type on the keyboard you'll see what you're typing transformed into uppercase. I have done this quite easily using Turbo C by calling geninterrupt() and using the variables _AH and _AL. I had to read a character using:

        _AH = 0x07;          // Read a character without echo
        geninterrupt(0x21);  // DOS interrupt

    Then, to transform it into an uppercase letter, I mask the 5th bit:

        _AL = _AL & 0xDF;    // Mask the entered character with 11011111

    and then I display the character using any output routine.

    Now, this solution will only work under old C DOS compilers. What we intend to do is make a close or similar solution to this using any modern C/C++ compiler under Windows XP. What I first thought of was modifying the keyboard ISR so that it masks the fifth bit of any entered character to turn it uppercase, but I do not know exactly how to do this. Second, I wanted to create a Win32 console program to either do the same thing (but to no avail) or make a Windows-compatible solution, but I still do not know which functions to use. Third, I thought of making a Windows program that modifies the ISR directly to suit my needs, and I'm still looking into how to do this.

    So please, if you could help me out on this, I would greatly appreciate it. Thank you in advance! (I'm using Windows XP on Intel x86 with the MinGW GCC compiler.)
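
    A hedged sketch of the "Win32 console program" variant mentioned above: it echoes typed characters as uppercase inside its own console, using the same bit mask, but it does not modify the keyboard ISR and does not affect other applications.

        #include <conio.h>   /* _getch(): available with MinGW and MSVC */
        #include <stdio.h>

        int main(void)
        {
            for (;;) {
                int c = _getch();        /* read a character without echo */
                if (c == 27)             /* ESC ends the demo */
                    break;
                if (c >= 'a' && c <= 'z')
                    c &= 0xDF;           /* clear bit 5: lowercase -> uppercase */
                putchar(c);
            }
            return 0;
        }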

    Read the article

  • Design Philosophy Question - When to create new functions

    - by Eclyps19
    This is a general design question, not relating to any language. I'm a bit torn between going for minimum code and optimum organization. I'll use my current project as an example. I have a bunch of tabs on a form that perform different functions. Let's say Tab 1 reads in a file with a specific layout, Tab 2 exports a file to a specific location, etc. The problem I'm running into now is that I need these tabs to do something slightly different based on the contents of a variable. If it contains a 1 I may need to use Layout A and perform some extra concatenation; if it contains a 2 I may need to use Layout B, do no concatenation, but add two integer fields; etc. There could be 10+ codes that I will be looking at.

    Is it preferable to create an individual path for each code early on, or to attempt to create a single path that branches out only when absolutely required?

    Creating an individual path for each code would allow my code to be extremely easy to follow at a glance, which in turn will help me out later on down the road when debugging or making changes. The downside is that I will increase the amount of code written by calling some of the same functions in multiple places (for example, steps 3, 5, and 9 may be exactly the same for every single code).

    Creating a single path that branches out only when required will be a bit messier and more difficult to follow at a glance, but I would create less code by placing conditionals only at steps that are unique.

    I realize that this may be a case-by-case decision, but in general, if you were handed a previously built program to work on, which would you prefer?

    Read the article

  • Storing header and data sections in a CSV file

    - by morpheous
    This should be relatively easy to do, but after several hours of straight programming my mind seems a bit frazzled and could do with some help. I have a C++ class which I am currently using to read/write data to file. I was initially using binary data, but have decided to store the data as CSV in order to let programs written in other languages load the data. The C++ class looks a bit like this:

        class BinaryData {
        public:
            BinaryData();
            void serialize(std::ostream& output) const;
            void deserialize(std::istream& input);

        private:
            Header m_hdr;
            std::vector<Row> m_rows;
        };

    I am simply rewriting the serialize/deserialize methods to write to a CSV file. I am not sure of the "best" way to store a header section and a "data" section in a "flat" CSV file, though - any suggestions on the most sensible way to do this?
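
    One hedged sketch of a simple convention (an assumption, not an established CSV standard): write the header fields as labelled records at the top, then a marker line, then one record per row. The field and helper names below are illustrative.

        // Sketch only: assumes m_hdr has a `version` field and Row has a toCsv() helper.
        void BinaryData::serialize(std::ostream& output) const
        {
            output << "version," << m_hdr.version << "\n"
                   << "row_count," << m_rows.size() << "\n"
                   << "#data\n";                       // marks the start of the data section
            for (std::size_t i = 0; i < m_rows.size(); ++i)
                output << m_rows[i].toCsv() << "\n";
        }

    deserialize would then read labelled lines until it sees "#data" and treat everything after it as row records.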

    Read the article

  • Perl passing argument into eval

    - by ehretf
    I'm facing an issue using the eval function. I have some function names stored in a SQL database, and my goal is to execute those functions from Perl (after retrieving them from SQL). Here is what I'm doing, considering that $RssSource->{$k}{Proceed} contains "&test" as a string retrieved from SQL:

        my $str2 = "ABCD";
        eval "$RssSource->{$k}{Proceed}"; warn if $@;

        sub test {
            my $arg = shift;
            print "fct TEST -> ", $row, "\n";
        }

    This is working correctly and displays:

        fct TEST ->

    However, I would like to be able to pass $str2 as an argument to $RssSource->{$k}{Proceed}, but I don't know how; every syntax I tried returns an error:

        eval "$RssSource->{$k}{Proceed}$str2"
        eval "$RssSource->{$k}{Proceed}($str2)"
        eval "$RssSource->{$k}{Proceed}"$str2
        eval "$RssSource->{$k}{Proceed}"($str2)

    Can someone tell me how to properly pass an argument to the evaluated function? Thanks a lot for your help. Regards, Florent
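
    A hedged sketch of two forms that should work, assuming the column really holds a bare call like "&test" (the difference is whether $str2's value is interpolated into the code string, or the eval'd code is left to resolve the variable itself):

        # 1) Interpolate the value itself into the code string (fine for simple strings):
        eval "$RssSource->{$k}{Proceed}(q{$str2})";
        warn $@ if $@;

        # 2) Escape the sigil so the eval'd code sees the lexical variable;
        #    a string eval shares the enclosing lexical scope:
        eval "$RssSource->{$k}{Proceed}(\$str2)";
        warn $@ if $@;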

    Read the article

  • Why doesn't java.lang.Number implement Comparable?

    - by Julien Chastang
    Does anyone know why java.lang.Number does not implement Comparable? This means that you cannot sort Numbers with Collections.sort, which seems to me a little strange.

    Post-discussion update: Thanks for all the helpful responses. I ended up doing some more research about this topic. The simplest explanation for why java.lang.Number does not implement Comparable is rooted in mutability concerns. For a bit of review, java.lang.Number is the abstract super-type of AtomicInteger, AtomicLong, BigDecimal, BigInteger, Byte, Double, Float, Integer, Long and Short. On that list, AtomicInteger and AtomicLong do not implement Comparable. Digging around, I discovered that it is not a good practice to implement Comparable on mutable types, because the objects can change during or after comparison, rendering the result of the comparison useless. Both AtomicLong and AtomicInteger are mutable. The API designers had the forethought not to have Number implement Comparable because it would have constrained the implementation of future subtypes. Indeed, AtomicLong and AtomicInteger were added in Java 1.5, long after java.lang.Number was initially implemented.

    Apart from mutability, there are probably other considerations here too. A compareTo implementation in Number would have to promote all numeric values to BigDecimal, because it is capable of accommodating all the Number sub-types. The implication of that promotion in terms of mathematics and performance is a bit unclear to me, but my intuition finds that solution kludgy.

    Read the article

  • Sinatra app in a gem

    - by JP
    I have a Sinatra application I've created and I'd like to package it as a gem-based binary. I have my gemspec and gem set up to generate a suitable executable that points to my_sinatra_app.rb (which is executable), but the Sinatra server never runs. Any ideas why, and how to make it work?

    The my_sinatra_app executable:

        #!/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
        #
        # This file was generated by RubyGems.

        require 'rubygems'

        version = ">= 0"

        if ARGV.first =~ /^_(.*)_$/ and Gem::Version.correct? $1 then
          version = $1
          ARGV.shift
        end

        gem 'my_sinatra_app', version
        load Gem.bin_path('my_sinatra_app', 'my_sinatra_app', version)
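
    For context, a classic-style (top-level) Sinatra app only starts its built-in server when the app file is the script being run directly; when RubyGems' generated wrapper load()s it instead, that check fails and nothing listens. A hedged sketch of the modular-style workaround (file and class names are illustrative):

        # lib/my_sinatra_app.rb -- modular style, so the server can be started explicitly
        require 'sinatra/base'

        class MySinatraApp < Sinatra::Base
          get '/' do
            'hello from the gem'
          end
        end

        # bin/my_sinatra_app -- the executable the gemspec points at:
        #   require 'my_sinatra_app'
        #   MySinatraApp.run!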

    Read the article

  • Resource mapping in a Ruby on Rails URL (RESTful API)

    - by randombits
    I'm having a bit of difficulty coming up with the right answer to this, so I will solicit my problem here. I'm working on a RESTful API. Naturally, I have multiple resources, some of which consist of parent-to-child relationships and some of which are standalone resources. Where I'm having a bit of difficulty is figuring out how to make things easier for the folks who will be building clients against my API.

    The situation is this. Hypothetically I have a 'Street' resource. Each street has multiple homes, so Street has_many Homes and Home belongs_to Street. If a user wants to request an HTTP GET on a specific home resource, the following should work:

        http://mymap/streets/5/homes/10

    That allows a user to get information for the home with id 10. Straightforward. My question is: am I breaking the rules of the book by also giving the user access to:

        http://mymap/homes/10

    Technically that home resource exists on its own without the street. It makes sense that it exists as its own entity without an encapsulating street, even though business logic says otherwise. What's the best way to handle this?
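
    A hedged sketch of how both URLs can be exposed at the routing level (Rails routing DSL; the exact syntax depends on the Rails version in use, and restricting the flat route to :show is only one option):

        # config/routes.rb
        resources :streets do
          resources :homes, :only => [:index, :show]   # /streets/5/homes/10
        end

        resources :homes, :only => [:show]             # /homes/10

    Both routes can point at the same HomesController, with the nested one scoping by params[:street_id] when it is present.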

    Read the article

  • HTTP POST with URL query parameters -- good idea or not?

    - by Steven Huwig
    I'm designing an API to go over HTTP, and I am wondering if using the HTTP POST command, but with URL query parameters only and no request body, is a good way to go.

    Considerations:

    - "Good Web design" requires non-idempotent actions to be sent via POST. This is a non-idempotent action.
    - It is easier to develop and debug this app when the request parameters are present in the URL.
    - The API is not intended for widespread use.
    - It seems like making a POST request with no body will take a bit more work, e.g. a Content-Length: 0 header must be explicitly added.
    - It also seems to me that a POST with no body is a bit counter to most developers' and HTTP frameworks' expectations.

    Are there any more pitfalls or advantages to sending parameters on a POST request via the URL query rather than the request body?

    Edit: The reason this is under consideration is that the operations are not idempotent and have side effects other than retrieval. See the HTTP spec:

        In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.
        ...
        Methods can also have the property of "idempotence" in that (aside from error or expiration issues) the side-effects of N > 0 identical requests is the same as for a single request. The methods GET, HEAD, PUT and DELETE share this property. Also, the methods OPTIONS and TRACE SHOULD NOT have side effects, and so are inherently idempotent.

    Read the article

  • What is the magic behind Perl's read() function and a buffer which is not a ref?

    - by alex8657
    I don't quite understand how Perl's read($buf) function is able to modify the content of the $buf variable. $buf is not a reference, so the parameter is passed by copy (going by my C/C++ knowledge). So how come the $buf variable is modified in the caller? Is it a tied variable or something? The C documentation about setbuf is also quite elusive and unclear to me.

        # Example 1
        $buf = '';                  # It is a scalar, not a ref
        $bytes = $fh->read($buf);
        print $buf;                 # $buf was modified; what is the magic?

        # Example 2
        sub read_it {
            my $buf = shift;
            return $fh->read($buf);
        }
        my $buf;
        $bytes = read_it($buf);
        print $buf;                 # As expected, this scope's $buf was not modified
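
    For reference, the mechanism is argument aliasing rather than tie: the elements of @_ are aliases to the caller's variables, so a sub (or a built-in such as read) that writes to its argument writes straight through to the caller's scalar. A small sketch of the aliasing:

        sub set_to_hello {
            $_[0] = "hello";        # writes through the alias to the caller's variable
        }

        my $x = "original";
        set_to_hello($x);
        print $x, "\n";             # prints "hello"

    In Example 2 above, "my $buf = shift" copies the value out of @_, so the inner read() fills the copy, not the caller's $buf.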

    Read the article

  • Creating a C++ client app for some abstract Windows server - how to manage TCP connection speed to the server?

    - by Kabumbus
    So we have a server with a known address, port and IP. We are developing that server too, so we can implement on it whatever is needed to help. What are standard/best practices for data transfer speed management between a C++ Windows client app and the server (also C++)? My main point is how to find out how much data can be uploaded/downloaded from/to the client over his slow network to my relatively super-fast server. (I need it to set the bit rate of his live audio/video stream.)

    My attempt at explaining point 3: we do not care how fast our server is; it is always faster than needed. We care about the client trying to stream his media out to our server. He streams live video data, encoded via ffmpeg, to our server, but he has, say, ADSL with 500 kb/s of outgoing traffic. He also uses ICQ or whatever, so he has less than 500 kb/s available per second. And he wants to stream live video! So we need to set up our ffmpeg to encode video with respect to the bit rate the user can actually provide. We develop both the server side and the client side, so we need a way of finding out how much the user can upload per second at any given moment (the value can change dynamically over time).
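
    A hedged sketch of the usual client-side approach (an assumption, not something from the post): time how long it takes to push a known number of bytes to the server, convert that to kbit/s, and leave headroom when picking the encoder bit rate so other traffic doesn't starve the stream.

        #include <chrono>
        #include <cstdint>

        // bytesSent: how much the client actually wrote to the socket during the
        // measurement window; elapsed: wall-clock duration of that window.
        double measured_kbits_per_sec(std::uint64_t bytesSent,
                                      std::chrono::steady_clock::duration elapsed)
        {
            double seconds = std::chrono::duration<double>(elapsed).count();
            if (seconds <= 0.0) return 0.0;
            return (bytesSent * 8.0 / 1000.0) / seconds;
        }

        // Pick a target bit rate below the measured capacity (20% headroom here).
        double target_stream_bitrate_kbps(double measuredKbps)
        {
            return measuredKbps * 0.8;
        }

    Re-measuring periodically (or watching how quickly the socket's send buffer drains) lets the target be adjusted as the client's available bandwidth changes.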

    Read the article

  • Where can I find soft-multiply and divide algorithms?

    - by srking
    I'm working on a micro-controller without hardware multiply and divide. I need to cook up software algorithms for these basic operations that strike a nice balance between compact size and efficiency. My C compiler port will employ these algorithms, not the C developers themselves. My google-fu is so far turning up mostly noise on this topic. Can anyone point me to something informative? I can use add/sub and shift instructions. Table-lookup-based algorithms might also work for me, but I'm a bit worried about cramming so much into the compiler's back-end...um, so to speak. Thanks!
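
    For reference, a hedged sketch of the classic shift-and-add multiply and shift-and-subtract (restoring) divide, written for unsigned 16-bit values in portable C; actual compiler runtime routines would be tuned to the target's registers, but the structure is the same.

        #include <stdint.h>

        /* Shift-and-add multiply: walk the multiplier's bits from LSB up and add
           the (shifted) multiplicand whenever the current bit is set. */
        uint16_t soft_mul16(uint16_t a, uint16_t b)
        {
            uint16_t result = 0;
            while (b) {
                if (b & 1)
                    result += a;
                a <<= 1;
                b >>= 1;
            }
            return result;   /* low 16 bits of the product */
        }

        /* Restoring division: build the quotient one bit at a time, MSB first.
           divisor must be non-zero. */
        uint16_t soft_div16(uint16_t dividend, uint16_t divisor, uint16_t *remainder)
        {
            uint16_t quotient = 0, rem = 0;
            int i;
            for (i = 15; i >= 0; --i) {
                rem = (uint16_t)((rem << 1) | ((dividend >> i) & 1));
                if (rem >= divisor) {
                    rem = (uint16_t)(rem - divisor);
                    quotient |= (uint16_t)(1u << i);
                }
            }
            if (remainder)
                *remainder = rem;
            return quotient;
        }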

    Read the article
