Search Results

Search found 22841 results on 914 pages for 'aspect orientated program'.


  • Do you use an exception class in your Perl programs? Why or why not?

    - by daotoad
    I've got a bunch of questions about how people use exceptions in Perl. I've included some background notes on exceptions; skip them if you want, but please take a moment to read the questions and respond to them. Thanks. Background on Perl Exceptions Perl has a very basic built-in exception system that provides a spring-board for more sophisticated usage. For example die "I ate a bug.\n"; throws an exception with a string assigned to $@. You can also throw an object, instead of a string: die BadBug->new('I ate a bug.'); You can even install a signal handler to catch the SIGDIE pseudo-signal. Here's a handler that rethrows exceptions as objects if they aren't already. $SIG{__DIE__} = sub { my $e = shift; $e = ExceptionObject->new( $e ) unless blessed $e; die $e; } This pattern is used in a number of CPAN modules. But perlvar says: Due to an implementation glitch, the $SIG{__DIE__} hook is called even inside an eval(). Do not use this to rewrite a pending exception in $@, or as a bizarre substitute for overriding CORE::GLOBAL::die(). This strange action at a distance may be fixed in a future release so that $SIG{__DIE__} is only called if your program is about to exit, as was the original intent. Any other use is deprecated. So now I wonder if objectifying exceptions in sigdie is evil. The Questions Do you use exception objects? If so, which one and why? If not, why not? If you don't use exception objects, what would entice you to use them? If you do use exception objects, what do you hate about them, and what could be better? Is objectifying exceptions in the __DIE__ handler a bad idea? Where should I objectify my exceptions? In my eval{} wrapper? In a sigdie handler? Are there any papers, articles or other resources on exceptions in general and in Perl that you find useful or enlightening?
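
    For reference, here is a minimal sketch of the object approach (the BadBug class and its method names are illustrative, not taken from any particular CPAN module): the exception is a blessed hash, thrown with die and examined with eval and blessed().

    ```perl
    package BadBug;
    use strict;
    use warnings;

    sub new     { my ($class, $msg) = @_; return bless { message => $msg }, $class }
    sub throw   { my $class = shift; die $class->new(@_) }
    sub message { return $_[0]->{message} }

    package main;
    use Scalar::Util qw(blessed);

    eval { BadBug->throw('I ate a bug.') };
    if ( my $err = $@ ) {
        if ( blessed($err) && $err->isa('BadBug') ) {
            warn 'Caught: ', $err->message, "\n";
        }
        else {
            die $err;    # not one of ours - rethrow untouched
        }
    }
    ```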

    Read the article

  • Is it Bad Practice to use C++ only for the STL containers?

    - by gmatt
    First a little background ... In what follows, I use C,C++ and Java for coding (general) algorithms, not gui's and fancy program's with interfaces, but simple command line algorithms and libraries. I started out learning about programming in Java. I got pretty good with Java and I learned to use the Java containers a lot as they tend to reduce complexity of book keeping while guaranteeing great performance. I intermittently used C++, but I was definitely not as good with it as with Java and it felt cumbersome. I did not know C++ enough to work in it without having to look up every single function and so I quickly reverted back to sticking to Java as much as possible. I then made a sudden transition into cracking and hacking in assembly language, because I felt I was concentrated too much attention on a much too high level language and I needed more experience with how a CPU interacts with memory and whats really going on with the 1's and 0's. I have to admit this was one of the most educational and fun experiences I've had with computers to date. For obviously reasons, I could not use assembly language to code on a daily basis, it was mostly reserved for fun diversions. After learning more about the computer through this experience I then realized that C++ is so much closer to the "level of 1's and 0's" than Java was, but I still felt it to be incredibly obtuse, like a swiss army knife with far too many gizmos to do any one task with elegance. I decided to give plain vanilla C a try, and I quickly fell in love. It was a happy medium between simplicity and enough "micromanagent" to not abstract what is really going on. However, I did miss one thing about Java: the containers. In particular, a simple container (like the stl vector) that expands dynamically in size is incredibly useful, but quite a pain to have to implement in C every time. Hence my code currently looks like almost entirely C with containers from C++ thrown in, the only feature I use from C++. I'd like to know if its consider okay in practice to use just one feature of C++, and ignore the rest in favor of C type code?
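
    As a concrete illustration of the style being asked about, here is a minimal sketch of otherwise C-flavoured code that borrows only std::vector for the dynamic-array bookkeeping (compiled as C++, of course):

    ```cpp
    #include <cstdio>
    #include <vector>

    int main()
    {
        std::vector<int> values;                 // grows on demand, cleans up after itself
        for (int i = 0; i < 10; ++i)
            values.push_back(i * i);
        for (std::size_t i = 0; i < values.size(); ++i)
            std::printf("%d\n", values[i]);
        return 0;
    }
    ```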

    Read the article

  • Weird Datagrid / paint behaviour

    - by Shane.C
    The scenario: A method sends out a broadcast packet, and returned packets that are validated are deemed okay to be added as a row in my datagrid (The returned packets are from devices i want to add into my program). So for each packet returned, containing information about a device, i create a new row. This is done by first sending packets out, creating rows and adding them to a list of rows that are to be added, and then after 5 seconds (In which case all packets would have returned by then) i add the rows. Here's a few snippets of code. Here for each returned packet, i create a row and add it to a list; DataRow row = DGSource.NewRow(); row["Name"] = deviceName; row["Model"] = deviceModel; row["Location"] = deviceLocation; row["IP"] = finishedIP; row["MAC"] = finishedMac; row["Subnet"] = finishedSubnet; row["Gateway"] = finishedGateway; rowsToAdd.Add(row); Then when the timer elapses; void timerToAddRows_Elapsed(object sender, System.Timers.ElapsedEventArgs e) { timerToAddRows.Enabled = false; try { int count = 0; foreach (DataRow rowToAdd in rowsToAdd) { DGSource.Rows.Add(rowToAdd); count++; } rowsToAdd.Clear(); DGAutoDevices.InvokeEx(f => DGAutoDevices.Refresh()); lblNumberFound.InvokeEx(f => lblNumberFound.Text = count + " new devices found."); } catch { } } So at this point, each row has been added, and i call the re paint, by doing refresh. (Note: i've also tried refreshing the form itself, no avail). However, when the datagrid shows the rows, the scroll bar / datagrid seems to have weird behavour..for example i can't highlight anything with clicks (It's set to full row selection), and the scroll bar looks like so; Calling refresh doesn't work, although if i resize the window even 1 pixel, or minimize and maximise, the problem is solved. Other things to note : The method that get's the packets and adds the rows to the list, and then from the list to the datagrid runs in it's own thread. Any ideas as to something i might be missing here?
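
    One hedged guess, since System.Timers.Timer raises Elapsed on a thread-pool thread rather than the UI thread: marshalling the whole row-adding block onto the UI thread, instead of only the Refresh and label updates, often cures exactly this kind of half-painted, unresponsive grid. A sketch using the poster's control names:

    ```csharp
    void timerToAddRows_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
    {
        timerToAddRows.Enabled = false;
        // Elapsed fires on a thread-pool thread, so hand all UI work to the UI thread.
        DGAutoDevices.Invoke((MethodInvoker)delegate
        {
            int count = 0;
            foreach (DataRow rowToAdd in rowsToAdd)
            {
                DGSource.Rows.Add(rowToAdd);
                count++;
            }
            rowsToAdd.Clear();
            lblNumberFound.Text = count + " new devices found.";
        });
    }
    ```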

    Read the article

  • Implications of trying to double free memory space in C

    - by SidNoob
    Here' my piece of code: #include <stdio.h> #include<stdlib.h> struct student{ char *name; }; int main() { struct student s; s.name = malloc(sizeof(char *)); // I hope this is the right way... printf("Name: "); scanf("%[^\n]", s.name); printf("You Entered: \n\n"); printf("%s\n", s.name); free(s.name); // This will cause my code to break } All I know is that dynamic allocation on the 'heap' needs to be freed. My question is, when I run the program, sometimes the code runs successfully. i.e. ./struct Name: Thisis Myname You Entered: Thisis Myname I tried reading this I've concluded that I'm trying to double-free a piece of memory i.e. I'm trying to free a piece of memory that is already free? (hope I'm correct here. If Yes, what could be the Security Implications of a double-free?) While it fails sometimes as its supposed to: ./struct Name: CrazyFishMotorhead Rider You Entered: CrazyFishMotorhead Rider *** glibc detected *** ./struct: free(): invalid next size (fast): 0x08adb008 *** ======= Backtrace: ========= /lib/tls/i686/cmov/libc.so.6(+0x6b161)[0xb7612161] /lib/tls/i686/cmov/libc.so.6(+0x6c9b8)[0xb76139b8] /lib/tls/i686/cmov/libc.so.6(cfree+0x6d)[0xb7616a9d] ./struct[0x8048533] /lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe6)[0xb75bdbd6] ./struct[0x8048441] ======= Memory map: ======== 08048000-08049000 r-xp 00000000 08:01 288098 /root/struct 08049000-0804a000 r--p 00000000 08:01 288098 /root/struct 0804a000-0804b000 rw-p 00001000 08:01 288098 /root/struct 08adb000-08afc000 rw-p 00000000 00:00 0 [heap] b7400000-b7421000 rw-p 00000000 00:00 0 b7421000-b7500000 ---p 00000000 00:00 0 b7575000-b7592000 r-xp 00000000 08:01 788956 /lib/libgcc_s.so.1 b7592000-b7593000 r--p 0001c000 08:01 788956 /lib/libgcc_s.so.1 b7593000-b7594000 rw-p 0001d000 08:01 788956 /lib/libgcc_s.so.1 b75a6000-b75a7000 rw-p 00000000 00:00 0 b75a7000-b76fa000 r-xp 00000000 08:01 920678 /lib/tls/i686/cmov/libc-2.11.1.so b76fa000-b76fc000 r--p 00153000 08:01 920678 /lib/tls/i686/cmov/libc-2.11.1.so b76fc000-b76fd000 rw-p 00155000 08:01 920678 /lib/tls/i686/cmov/libc-2.11.1.so b76fd000-b7700000 rw-p 00000000 00:00 0 b7710000-b7714000 rw-p 00000000 00:00 0 b7714000-b7715000 r-xp 00000000 00:00 0 [vdso] b7715000-b7730000 r-xp 00000000 08:01 788898 /lib/ld-2.11.1.so b7730000-b7731000 r--p 0001a000 08:01 788898 /lib/ld-2.11.1.so b7731000-b7732000 rw-p 0001b000 08:01 788898 /lib/ld-2.11.1.so bffd5000-bfff6000 rw-p 00000000 00:00 0 [stack] Aborted So why is it that my code does work sometimes? i.e. the compiler is not able to detect at times that I'm trying to free an already freed memory. Has it got to do something with my stack/heap size?
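
    For comparison, here is a hedged sketch of the same program with the allocation sized for the string rather than for a pointer. The crash above is glibc detecting heap-metadata corruption caused by scanf writing past a pointer-sized block; it is not a true double free, which is why it only aborts for longer names.

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    struct student {
        char *name;
    };

    int main(void)
    {
        struct student s;

        s.name = malloc(64);               /* room for the characters, not just a pointer */
        if (s.name == NULL)
            return 1;

        printf("Name: ");
        scanf("%63[^\n]", s.name);         /* bounded read: at most 63 chars plus the '\0' */

        printf("You Entered: \n\n%s\n", s.name);

        free(s.name);                      /* a single free of an uncorrupted block is fine */
        return 0;
    }
    ```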

    Read the article

  • Unable to read data from the transport connection: the connection was closed

    - by webdreamer
    The exception is Remoting Exception - Authentication Failure. The detailed message says "Unable to read data from the transport connection: the connection was closed." I'm having trouble with creating two simple servers that can comunicate as remote objects in C#. ServerInfo is just a class I created that holds the IP and Port and can give back the address. It works fine, as I used it before, and I've debugged it. Also the server is starting just fine, no exception is thrown, and the channel is registered without problems. I'm using Forms to do the interfaces, and call some of the methods on the server, but didn't find any problems in passing the parameters from the FormsApplication to the server when debugging. All seems fine in that chapter. public ChordServerProgram() { RemotingServices.Marshal(this, "PADIBook"); nodeInt = 0; } public void startServer() { try { serverChannel = new TcpChannel(serverInfo.Port); ChannelServices.RegisterChannel(serverChannel, true); } catch (Exception e) { Console.WriteLine(e.ToString()); } } I run two instances of this program. Then startNode is called on one of the instances of the application. The port is fine, the address generated is fine as well. As you can see, I'm using the IP for localhost, since this server is just for testing purposes. public void startNode(String portStr) { IPAddress address = IPAddress.Parse("127.0.0.1"); Int32 port = Int32.Parse(portStr); serverInfo = new ServerInfo(address, port); startServer(); //node = new ChordNode(serverInfo,this); } Then, in the other istance, through the interface again, I call another startNode method, giving it a seed server to get information from. This is where it goes wrong. When it calls the method on the seedServer proxy it just got, a RemotingException is thrown, due to an authentication failure. (The parameter I'll want to get is the node, I'm just using the int to make sure the ChordNode class has nothing to do with this error.) public void startNode(String portStr, String seedStr) { IPAddress address = IPAddress.Parse("127.0.0.1"); Int32 port = Int32.Parse(portStr); serverInfo = new ServerInfo(address, port); IPAddress addressSeed = IPAddress.Parse("127.0.0.1"); Int32 portSeed = Int32.Parse(seedStr); ServerInfo seedInfo = new ServerInfo(addressSeed, portSeed); startServer(); ChordServerProgram seedServer = (ChordServerProgram)Activator.GetObject(typeof(ChordServerProgram), seedInfo.GetFullAddress()); // node = new ChordNode(serverInfo,this); int seedNode = seedServer.nodeInt; // node.chordJoin(seedNode.self); }

    Read the article

  • How to pass a struct to a C++ DLL from C#

    - by Nerds Rule
    I am trying to call a function in an unmanaged C++ DLL. It has this prototype: [DllImport("C:\\Program Files\\MySDK\\VSeries.dll", EntryPoint = "BII_Send_Index_Template_MT" )] internal unsafe static extern Int32 BII_Send_Index_Template_MT(IntPtr pUnitHandle, ref BII_Template template, Int32 option, Boolean async); BII_Template template = new BII_Template(); error_code = BII_Send_Index_Template_MT(pUnitHandle, ref template, option, false); This is how I define the BII_Template struct in C#: public unsafe struct BII_Template { public ulong id; public ulong employee_id; public ulong password; public byte sensor_version; public byte template_version; public fixed char name[16]; public byte finger; public byte admin_level; public byte schedule; public byte security_thresh; public fixed byte noise_level[18]; public byte corramb; public byte reference_x; public byte reference_y; public fixed byte ihcore[3]; public fixed byte ivcore[3]; public byte temp_xoffset; public byte temp_yoffset; public byte index; public fixed byte inphase[5500]; }; It builds, and when I run it the DLL returns error_code = "The record checksum is invalid." I assume that I am using the ref keyword in the wrong way or that the size of some of the elements in the struct is wrong. ----- EDIT ------------ Here is the struct in C++: typedef struct { unsigned long id; unsigned long employee_id; unsigned long password; unsigned char sensor_version; unsigned char template_version; char name[16]; unsigned char finger; unsigned char admin_level; unsigned char schedule; unsigned char security_thresh; unsigned char noise_level[18]; unsigned char corramb; unsigned char reference_x; unsigned char reference_y; unsigned char ihcore[NUM_CORE]; unsigned char ivcore[NUM_CORE]; unsigned char temp_xoffset; unsigned char temp_yoffset; unsigned char index; unsigned char inphase[PACKED_ARRAY_SIZE]; } BII_Template;
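
    One hedged guess at the mismatch (field widths and packing are assumptions, so check them against the SDK header): on Windows a C++ unsigned long is 32 bits, so it should map to uint rather than C#'s 64-bit ulong, and the embedded arrays are usually easier to get right with MarshalAs attributes plus an explicit StructLayout (from System.Runtime.InteropServices) than with unsafe fixed buffers:

    ```csharp
    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi, Pack = 1)]   // Pack value is an assumption
    public struct BII_Template
    {
        public uint id;                // "unsigned long" is 32 bits in a Windows C++ DLL
        public uint employee_id;
        public uint password;
        public byte sensor_version;
        public byte template_version;
        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 16)]
        public string name;            // char name[16]
        public byte finger;
        public byte admin_level;
        public byte schedule;
        public byte security_thresh;
        [MarshalAs(UnmanagedType.ByValArray, SizeConst = 18)]
        public byte[] noise_level;
        public byte corramb;
        public byte reference_x;
        public byte reference_y;
        [MarshalAs(UnmanagedType.ByValArray, SizeConst = 3)]
        public byte[] ihcore;          // NUM_CORE assumed to be 3, matching the C# version
        [MarshalAs(UnmanagedType.ByValArray, SizeConst = 3)]
        public byte[] ivcore;
        public byte temp_xoffset;
        public byte temp_yoffset;
        public byte index;
        [MarshalAs(UnmanagedType.ByValArray, SizeConst = 5500)]
        public byte[] inphase;         // PACKED_ARRAY_SIZE assumed to be 5500
    }
    ```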

    Read the article

  • overloading new/delete problem

    - by hidayat
    This is my scenario: I'm trying to overload new and delete globally. I have written my allocator class in a file called allocator.h. What I am trying to achieve is that if a file includes this header file, my version of new and delete should be used. So in a header file "allocator.h" I have declared the two functions extern void* operator new(std::size_t size); extern void operator delete(void *p, std::size_t size); In the same header file I have a class that does all the allocator stuff, class SmallObjAllocator { ... }; I want to call this class from the new and delete functions and I would like the class to be static, so I have done this: template<unsigned dummy> struct My_SmallObjectAllocatorImpl { static SmallObjAllocator myAlloc; }; template<unsigned dummy> SmallObjAllocator My_SmallObjectAllocatorImpl<dummy>::myAlloc(DEFAULT_CHUNK_SIZE, MAX_OBJ_SIZE); typedef My_SmallObjectAllocatorImpl<0> My_SmallObjectAllocator; and in the cpp file it looks like this: allocator.cc void* operator new(std::size_t size) { std::cout << "using my new" << std::endl; if(size > MAX_OBJ_SIZE) return malloc(size); else return My_SmallObjectAllocator::myAlloc.allocate(size); } void operator delete(void *p, std::size_t size) { if(size > MAX_OBJ_SIZE) free(p); else My_SmallObjectAllocator::myAlloc.deallocate(p, size); } The problem occurs when I try to call the constructor for the class SmallObjAllocator, which is a static object. For some reason the compiler is calling my overloaded function new when initializing it. So it then tries to use My_SmallObjectAllocator::myAlloc.deallocate(p, size); which is not defined, so the program crashes. So why is the compiler calling new when I define a static object? And how can I solve it?
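
    A hedged sketch of the usual workaround, reusing the question's SmallObjAllocator declarations: hide the static instance behind a function so it is constructed on first use, which sidesteps the ordering problem between the allocator's own construction and the first call to the replaced operator new. This would replace the template holder and live in allocator.cc next to the existing includes:

    ```cpp
    static SmallObjAllocator& allocatorInstance()
    {
        // Constructed the first time it is needed, not at some unspecified
        // point during static initialization.
        static SmallObjAllocator alloc(DEFAULT_CHUNK_SIZE, MAX_OBJ_SIZE);
        return alloc;
    }

    void* operator new(std::size_t size)
    {
        if (size > MAX_OBJ_SIZE)
            return malloc(size);
        return allocatorInstance().allocate(size);
    }

    void operator delete(void *p, std::size_t size)
    {
        if (size > MAX_OBJ_SIZE)
            free(p);
        else
            allocatorInstance().deallocate(p, size);
    }
    ```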

    Read the article

  • Rewriting Live TCP/IP (Layer 4) (i.e. Socket Layer) Streams

    - by user213060
    I have a simple problem which I'm sure someone here has done before... I want to rewrite Layer 4 TCP/IP streams (Not lower layer individual packets or frames.) Ettercap's etterfilter command lets you perform simple live replacements of Layer 4 TCP/IP streams based on fixed strings or regexes. Example ettercap scripting code: if (ip.proto == TCP && tcp.dst == 80) { if (search(DATA.data, "gzip")) { replace("gzip", " "); msg("whited out gzip\n"); } } if (ip.proto == TCP && tcp.dst == 80) { if (search(DATA.data, "deflate")) { replace("deflate", " "); msg("whited out deflate\n"); } } http://ettercap.sourceforge.net/forum/viewtopic.php?t=2833 I would like to rewrite streams based on my own filter program instead of just simple string replacements. Anyone have an idea of how to do this? Is there anything other than Ettercap that can do live replacement like this, maybe as a plugin to a VPN software or something? I would like to have a configuration similar to ettercap's silent bridged sniffing configuration between two Ethernet interfaces. This way I can silently filter traffic coming from either direction with no NATing problems. Note that my filter is an application that acts as a pipe filter, similar to the design of unix command-line filters: >[eth0] <----------> [my filter] <----------> [eth1]< What I am already aware of, but are not suitable: Tun/Tap - Works at the lower packet layer, I need to work with the higher layer streams. Ettercap - I can't find any way to do replacements other than the restricted capabilities in the example above. Hooking into some VPN software? - I just can't figure out which or exactly how. libnetfilter_queue - Works with lower layer packets, not TCP/IP streams. Again, the rewriting should occur at the transport layer (Layer 4) as it does in this example, instead of a lower layer packet-based approach. Exact code will help immensely! Thanks!

    Read the article

  • A case-insensitive related implementation problem

    - by Robert
    Hi All, I am going through a final refinement posted by the client, which requires me to do a case-insensitive query. I will basically walk through how this simple program works. First of all, in my Java class, I did a fairly simple webpage parsing: title=(String)results.get("title"); doc = docBuilder.parse("http://" + server + ":" + port + "/exist/rest/db/wb/xql/media_lookup.xql?" + "&title=" + title); This Java statement references an XQuery file "media_lookup.xql" which is stored on localhost, and the only parameter we are passing is the string "title". Secondly, let's take a look at that XQuery file: $title := request:get-parameter('title',""), $mediaNodes := doc('/db/wb/portfolio/media_data.xml'), $query := $mediaNodes//media[contains(title,$title)], Then it will evaluate that query. This XQuery will get the "title" parameter that is passed from our Java class, and query the "media_data" XML file stored in the database, which contains a bunch of media nodes with a 'title' element node. As you may expect, this simple query will just match those media nodes whose 'title' element contains the value of the string 'title' as a substring. So if our 'title' is "Chi", it will return media nodes whose title may be "Chicago" or "Chicken". The refinement request posted by the client is that there should be NO case-sensitivity. The intuitive way is to modify the XQuery statement by using a lower-case function in it, like: $query := $mediaNodes//media[contains(lower-case(title/text()), lower-case($title))], However, here is the problem: this modified query runs my machine into a memory overflow. Since my "media_data.xml" is quite huge and contains thousands of millions of media nodes, I assume the lower-case() function will run on each of the entries, thus causing the machine to crash. I've talked with some experienced XQuery programmers, and they think I should use an index to solve this problem, and I will definitely research into that. But before that, I am just posting this problem here to get other ideas or suggestions. Do you think any other way may help? For example, could I tweak the Java parse statement to achieve the case-insensitivity? I think I saw some people do string concatenation by using "contains." in Java before passing it to the server. Any idea or help is welcome; thanks in advance.

    Read the article

  • synchronized in java - Proper use

    - by ZoharYosef
    I'm building a simple program to use in multi processes (Threads). My question is more to understand - when I have to use a reserved word synchronized? Do I need to use this word in any method that affects the bone variables? I know I can put it on any method that is not static, but I want to understand more. thank you! here is the code: public class Container { // *** data members *** public static final int INIT_SIZE=10; // the first (init) size of the set. public static final int RESCALE=10; // the re-scale factor of this set. private int _sp=0; public Object[] _data; /************ Constructors ************/ public Container(){ _sp=0; _data = new Object[INIT_SIZE]; } public Container(Container other) { // copy constructor this(); for(int i=0;i<other.size();i++) this.add(other.at(i)); } /** return true is this collection is empty, else return false. */ public synchronized boolean isEmpty() {return _sp==0;} /** add an Object to this set */ public synchronized void add (Object p){ if (_sp==_data.length) rescale(RESCALE); _data[_sp] = p; // shellow copy semantic. _sp++; } /** returns the actual amount of Objects contained in this collection */ public synchronized int size() {return _sp;} /** returns true if this container contains an element which is equals to ob */ public synchronized boolean isMember(Object ob) { return get(ob)!=-1; } /** return the index of the first object which equals ob, if none returns -1 */ public synchronized int get(Object ob) { int ans=-1; for(int i=0;i<size();i=i+1) if(at(i).equals(ob)) return i; return ans; } /** returns the element located at the ind place in this container (null if out of range) */ public synchronized Object at(int p){ if (p>=0 && p<size()) return _data[p]; else return null; }
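
    As a small illustration of why the methods that touch _sp and _data need the keyword, here is a hypothetical driver (not part of the question's code): two threads adding concurrently through an unsynchronized add() can interleave between reading _sp and incrementing it, so elements get lost or the array is indexed past its end.

    ```java
    public class ContainerRace {
        public static void main(String[] args) throws InterruptedException {
            final Container c = new Container();
            Runnable task = new Runnable() {
                public void run() {
                    for (int i = 0; i < 10000; i++) {
                        c.add(Integer.valueOf(i));
                    }
                }
            };
            Thread t1 = new Thread(task);
            Thread t2 = new Thread(task);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            // With add() and size() synchronized this always prints 20000;
            // with the keyword removed it usually prints less, or throws.
            System.out.println(c.size());
        }
    }
    ```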

    Read the article

  • Generics vs Object performance

    - by Risho
    I'm doing practice problems from MCTS Exam 70-536 Microsoft .NET Framework Application Development Foundation, and one of the problems is to create two classes, one generic and one object type, that both perform the same thing, in which a loop uses the class and is iterated over a thousand times, and then, using the timer, to time the performance of both. There was another post at C# generics question that asks the same question, but no one replied. Basically, if in my code I run the generic class first, it takes longer to process. If I run the object class first, then the object class takes longer to process. The whole idea was to prove that generics perform faster. I used the original user's code to save me some time. I didn't particularly see anything wrong with the code and was puzzled by the outcome. Can someone explain these unusual results? Thanks, Risho. Here is the code: class Program { class Object_Sample { public Object_Sample() { Console.WriteLine("Object_Sample Class"); } public long getTicks() { return DateTime.Now.Ticks; } public void display(Object a) { Console.WriteLine("{0}", a); } } class Generics_Samle<T> { public Generics_Samle() { Console.WriteLine("Generics_Sample Class"); } public long getTicks() { return DateTime.Now.Ticks; } public void display(T a) { Console.WriteLine("{0}", a); } } static void Main(string[] args) { long ticks_initial, ticks_final, diff_generics, diff_object; Object_Sample OS = new Object_Sample(); Generics_Samle<int> GS = new Generics_Samle<int>(); //Generic Sample ticks_initial = 0; ticks_final = 0; ticks_initial = GS.getTicks(); for (int i = 0; i < 50000; i++) { GS.display(i); } ticks_final = GS.getTicks(); diff_generics = ticks_final - ticks_initial; //Object Sample ticks_initial = 0; ticks_final = 0; ticks_initial = OS.getTicks(); for (int j = 0; j < 50000; j++) { OS.display(j); } ticks_final = OS.getTicks(); diff_object = ticks_final - ticks_initial; Console.WriteLine("\nPerformance of Generics {0}", diff_generics); Console.WriteLine("Performance of Object {0}", diff_object); Console.ReadKey(); } }
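
    A hedged re-working of the measurement itself: the Console.WriteLine inside each loop and the one-off JIT cost of whichever path runs first swamp any boxing difference, so using Stopwatch, dropping the I/O, and warming both paths up before timing gives numbers that actually compare generics against object.

    ```csharp
    using System;
    using System.Collections;
    using System.Collections.Generic;
    using System.Diagnostics;

    class Bench
    {
        static void Main()
        {
            List<int> generic = new List<int>();
            ArrayList boxed = new ArrayList();
            for (int i = 0; i < 1000000; i++) { generic.Add(i); boxed.Add(i); }

            SumGeneric(generic);            // warm-up: JIT both paths before timing
            SumObjects(boxed);

            Stopwatch sw = Stopwatch.StartNew();
            SumGeneric(generic);
            sw.Stop();
            Console.WriteLine("Generic List<int>: {0} ticks", sw.ElapsedTicks);

            sw = Stopwatch.StartNew();
            SumObjects(boxed);
            sw.Stop();
            Console.WriteLine("Object ArrayList : {0} ticks", sw.ElapsedTicks);
        }

        static long SumGeneric(List<int> xs) { long s = 0; foreach (int x in xs) s += x; return s; }
        static long SumObjects(ArrayList xs) { long s = 0; foreach (object x in xs) s += (int)x; return s; } // unboxes every element
    }
    ```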

    Read the article

  • C++ include statement required if defining a map in a header file

    - by Justin
    I was doing a project for computer course on programming concepts. This project was to be completed in C++ using Object Oriented designs we learned throughout the course. Anyhow, I have two files symboltable.h and symboltable.cpp. I want to use a map as the data structure so I define it in the private section of the header file. I #include <map> in the cpp file before I #include "symboltable.h". I get several errors from the compiler (MS VS 2008 Pro) when I go to debug/run the program the first of which is: Error 1 error C2146: syntax error : missing ';' before identifier 'table' c:\users\jsmith\documents\visual studio 2008\projects\project2\project2\symboltable.h 22 Project2 To fix this I had to #include <map> in the header file, which to me seems strange. Here are the relevant code files: // symboltable.h #include <map> class SymbolTable { public: SymbolTable() {} void insert(string variable, double value); double lookUp(string variable); void init(); // Added as part of the spec given in the conference area. private: map<string, double> table; // Our container for variables and their values. }; and // symboltable.cpp #include <map> #include <string> #include <iostream> using namespace std; #include "symboltable.h" void SymbolTable::insert(string variable, double value) { table[variable] = value; // Creates a new map entry, if variable name already exist it overwrites last value. } double SymbolTable::lookUp(string variable) { if(table.find(variable) == table.end()) // Search for the variable, find() returns a position, if thats the end then we didnt find it. throw exception("Error: Uninitialized variable"); else return table[variable]; } void SymbolTable::init() { table.clear(); // Clears the map, removes all elements. }
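
    The include is needed because a header is compiled in whatever translation unit includes it, so it cannot rely on the .cpp file having pulled in <map> (or <string>, or a using-directive) beforehand. A hedged sketch of a self-contained version of the header (the include-guard name is illustrative):

    ```cpp
    #ifndef SYMBOLTABLE_H
    #define SYMBOLTABLE_H

    #include <map>
    #include <string>

    class SymbolTable {
    public:
        SymbolTable() {}
        void insert(std::string variable, double value);
        double lookUp(std::string variable);
        void init();
    private:
        std::map<std::string, double> table;   // fully qualified: no using-directives in headers
    };

    #endif
    ```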

    Read the article

  • are C functions declared in <c____> headers guaranteed to be in the global namespace as well as std?

    - by Evan Teran
    So this is something that I've always wondered but was never quite sure about. So it is strictly a matter of curiosity, not a real problem. As far as I understand, what you do something like #include <cstdlib> everything (except macros of course) are declared in the std:: namespace. Every implementation that I've ever seen does this by doing something like the following: #include <stdlib.h> namespace std { using ::abort; // etc.... } Which of course has the effect of things being in both the global namespace and std. Is this behavior guaranteed? Or is it possible that an implementation could put these things in std but not in the global namespace? The only way I can think of to do that would be to have your libstdc++ implement every c function itself placing them in std directly instead of just including the existing libc headers (because there is no mechanism to remove something from a namespace). Which is of course a lot of effort with little to no benefit. The essence of my question is, is the following program strictly conforming and guaranteed to work? #include <cstdio> int main() { ::printf("hello world\n"); } EDIT: The closest I've found is this (17.4.1.2p4): Except as noted in clauses 18 through 27, the contents of each header cname shall be the same as that of the corresponding header name.h, as specified in ISO/IEC 9899:1990 Programming Languages C (Clause 7), or ISO/IEC:1990 Programming Languages—C AMENDMENT 1: C Integrity, (Clause 7), as appropriate, as if by inclusion. In the C + + Standard Library, however, the declarations and definitions (except for names which are defined as macros in C) are within namespace scope (3.3.5) of the namespace std. which to be honest I could interpret either way. "the contents of each header cname shall be the same as that of the corresponding header name.h, as specified in ISO/IEC 9899:1990 Programming Languages C" tells me that they may be required in the global namespace, but "In the C + + Standard Library, however, the declarations and definitions (except for names which are defined as macros in C) are within namespace scope (3.3.5) of the namespace std." says they are in std (but doesn't specify any other scoped they are in).

    Read the article

  • Copy constructor bug

    - by user168715
    I'm writing a simple nD-vector class, but am encountering a strange bug. I've stripped out the class to the bare minimum that still reproduces the bug: #include <iostream> using namespace std; template<unsigned int size> class nvector { public: nvector() {data_ = new double[size];} ~nvector() {delete[] data_;} template<unsigned int size2> nvector(const nvector<size2> &other) { data_ = new double[size]; int i=0; for(; i<size && i < size2; i++) data_[i] = other[i]; for(; i<size; i++) data_[i] = 0; } double &operator[](int i) {return data_[i];} const double&operator[](int i) const {return data_[i];} private: const nvector<size> &operator=(const nvector<size> &other); //Intentionally unimplemented for now double *data_; }; int main() { nvector<2> vector2d; vector2d[0] = 1; vector2d[1] = 2; nvector<3> vector3d(vector2d); for(int i=0; i<3; i++) cout << vector3d[i] << " "; cout << endl; //Prints 1 2 0 nvector<3> other3d(vector3d); for(int i=0; i<3; i++) cout << other3d[i] << " "; cout << endl; //Prints 1 2 0 } //Segfault??? On the surface this seems to work fine, and both tests print out the correct values. However, at the end of main the program crashes with a segfault, which I've traced to nvector's destructor. At first I thought the (incorrect) default assignment operator was somehow being called, which is why I added the (currently) unimplemented explicit assignment operator to rule this possibility out. So my copy constructor must be buggy, but I'm having one of those days where I'm staring at extremely simple code and just can't see it. Do you guys have any ideas?
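
    For what it's worth, one likely culprit (offered as a reading of the code, not a certainty): a constructor template is never a copy constructor, so nvector<3> other3d(vector3d) still uses the compiler-generated copy constructor, which copies the raw data_ pointer and leaves two objects deleting the same array at scope exit. Adding a real copy constructor alongside the template one would look roughly like this:

    ```cpp
    nvector(const nvector<size>& other)     // the genuine copy constructor
    {
        data_ = new double[size];
        for (unsigned int i = 0; i < size; i++)
            data_[i] = other[i];            // deep copy instead of sharing the pointer
    }
    ```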

    Read the article

  • DFS Backtracking with java

    - by Cláudio Ribeiro
    I'm having problems with DFS backtracking in an adjacency matrix. Here's my code: (i added the test to the main in case someone wants to test it) public class Graph { private int numVertex; private int numEdges; private boolean[][] adj; public Graph(int numVertex, int numEdges) { this.numVertex = numVertex; this.numEdges = numEdges; this.adj = new boolean[numVertex][numVertex]; } public void addEdge(int start, int end){ adj[start-1][end-1] = true; adj[end-1][start-1] = true; } List<Integer> visited = new ArrayList<Integer>(); public Integer DFS(Graph G, int startVertex){ int i=0; if(pilha.isEmpty()) pilha.push(startVertex); for(i=1; i<G.numVertex; i++){ pilha.push(i); if(G.adj[i-1][startVertex-1] != false){ G.adj[i-1][startVertex-1] = false; G.adj[startVertex-1][i-1] = false; DFS(G,i); break; }else{ visited.add(pilha.pop()); } System.out.println("Stack: " + pilha); } return -1; } Stack<Integer> pilha = new Stack(); public static void main(String[] args) { Graph g = new Graph(6, 9); g.addEdge(1, 2); g.addEdge(1, 5); g.addEdge(2, 4); g.addEdge(2, 5); g.addEdge(2, 6); g.addEdge(3, 4); g.addEdge(3, 5); g.addEdge(4, 5); g.addEdge(6, 4); g.DFS(g, 1); } } I'm trying to solve the euler path problem. the program solves basic graphs but when it needs to backtrack, it just does not do it. I think the problem might be in the stack manipulations or in the recursive dfs call. I've tried a lot of things, but still can't seem to figure out why it does not backtrack. Can somebody help me ?
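
    For comparison, here is a hedged sketch of the usual recursive shape for this problem (Hierholzer-style, not the poster's code): consume each edge once, recurse, and append a vertex only after all of its edges have been used, so the collected list, read backwards, is the Euler trail and no explicit stack juggling is needed.

    ```java
    import java.util.ArrayList;
    import java.util.List;

    class EulerTrail {
        // adj is a symmetric adjacency matrix; start from a vertex of odd degree if one exists.
        static void euler(boolean[][] adj, int v, List<Integer> path) {
            for (int u = 0; u < adj.length; u++) {
                if (adj[v][u]) {
                    adj[v][u] = false;
                    adj[u][v] = false;      // consume the undirected edge in both directions
                    euler(adj, u, path);
                }
            }
            path.add(v);                    // post-order: recorded once every incident edge is used
        }

        public static void main(String[] args) {
            boolean[][] adj = new boolean[4][4];
            int[][] edges = { {0, 1}, {1, 2}, {2, 0}, {0, 3}, {3, 1} };
            for (int[] e : edges) { adj[e[0]][e[1]] = true; adj[e[1]][e[0]] = true; }
            List<Integer> path = new ArrayList<Integer>();
            euler(adj, 0, path);            // vertices 0 and 1 have odd degree, so start at one of them
            System.out.println(path);       // the Euler trail, in reverse order
        }
    }
    ```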

    Read the article

  • Basic data alignment question

    - by Broken Logic
    I've been playing around to see how my computer works under the hood. What I'm interested in is seeing is what happens on the stack inside a function. To do this I've written the following toy program: #include <stdio.h> void __cdecl Test1(char a, unsigned long long b, char c) { char c1; unsigned long long b1; char a1; c1 = 'b'; b1 = 4; a1 = 'r'; printf("%d %d - %d - %d %d Total: %d\n", (long)&b1 - (long)&a1, (long)&c1 - (long)&b1, (long)&a - (long)&c1, (long)&b - (long)&a, (long)&c - (long)&b, (long)&c - (long)&a1 ); }; struct TestStruct { char a; unsigned long long b; char c; }; void __cdecl Test2(char a, unsigned long long b, char c) { TestStruct locals; locals.a = 'b'; locals.b = 4; locals.c = 'r'; printf("%d %d - %d - %d %d Total: %d\n", (long)&locals.b - (long)&locals.a, (long)&locals.c - (long)&locals.b, (long)&a - (long)&locals.c, (long)&b - (long)&a, (long)&c - (long)&b, (long)&c - (long)&locals.a ); }; int main() { Test1('f', 0, 'o'); Test2('f', 0, 'o'); return 0; } And this spits out the following: 9 19 - 13 - 4 8 Total: 53 8 8 - 24 - 4 8 Total: 52 The function args are well behaved but as the calling convention is specified, I'd expect this. But the local variables are a bit wonky. My question is, why wouldn't these be the same? The second call seems to produce a more compact and better aligned stack. Looking at the ASM is unenlightening (at least to me), as the variable addresses are still aliased there. So I guess this is really a question about the assembler itself allocates the stack to local variables. I realise that any specific answer is likely to be platform specific. I'm more interested in a general explanation unless this quirk really is platform specific. For the record though, I'm compiling with VS2010 on a 64bit Intel machine.

    Read the article

  • Benchmark of Java Try/Catch Block

    - by hectorg87
    I know that going into a catch block has some significant cost when executing a program; however, I was wondering if entering a try{} block also had any impact, so I started looking for an answer on Google. I found many opinions, but no benchmarking at all. Some answers I found were: Java try/catch performance, is it recommended to keep what is inside the try clause to a minimum? Try Catch Performance Java Java try catch blocks However, they didn't answer my question with facts, so I decided to try it for myself. Here's what I did. I have a CSV file with this format: host;ip;number;date;status;email;uid;name;lastname;promo_code; where everything after status is optional and will not even have the corresponding ;, so when parsing, a validation has to be done to see if the value is there; here's where the try/catch issue came to my mind. The current code that I inherited in my company does this: StringTokenizer st=new StringTokenizer(line,";"); String host = st.nextToken(); String ip = st.nextToken(); String number = st.nextToken(); String date = st.nextToken(); String status = st.nextToken(); String email = ""; try{ email = st.nextToken(); }catch(NoSuchElementException e){ email = ""; } and it repeats what is done for email for uid, name, lastname and promo_code. I changed everything to: if(st.hasMoreTokens()){ email = st.nextToken(); } and in fact it performs faster when parsing a file that doesn't have the optional columns. Here are the average times: --- Trying:122 milliseconds --- Checking:33 milliseconds However, here's what confused me and the reason I'm asking: when running the example with values for the optional columns in all 8000 lines of the CSV, the if() version still performs better than the try/catch version, so my question is: does the try block really have no performance impact on my code? The average times for this example are: --- Trying:105 milliseconds --- Checking:43 milliseconds Can somebody explain what's going on here? Thanks a lot.
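
    For the record, the non-exception version can be kept tidy with one small helper, so the optional-column handling isn't written out five times (a sketch built around the same StringTokenizer as above):

    ```java
    private static String nextOrEmpty(StringTokenizer st) {
        return st.hasMoreTokens() ? st.nextToken() : "";
    }

    // ...
    StringTokenizer st = new StringTokenizer(line, ";");
    String host = st.nextToken();
    String ip = st.nextToken();
    String number = st.nextToken();
    String date = st.nextToken();
    String status = st.nextToken();
    String email = nextOrEmpty(st);      // optional columns from here on
    String uid = nextOrEmpty(st);
    String name = nextOrEmpty(st);
    String lastname = nextOrEmpty(st);
    String promoCode = nextOrEmpty(st);
    ```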

    Read the article

  • C++ Stack Overflow

    - by PhilMAN
    Here is some code: void main() { GameEngine ge("phil", "anotherguy"); string response; do { ge.playGame(); cout << endl << "Do you want to (r)eplay the same battle, (s)tart a new battle, or (q)uit? "; cin >> response; } while(response == "r" || response == "R" || response == "s" || response == "S" ); } GameEngine::GameEngine(string name1, string name2) { p1Name = name1; p2Name = name2; } void GameEngine::playGame() { cout << "PLAY GAME" << endl; Army p1, p2; Battlefield testField; RuleSet rs; int xSize = 13; // Number of rows int ySize = 13; // Number of columns loadData(p1, p2, testField, rs, xSize, ySize); ... } void GameEngine::loadData(Army& p1, Army& p2, Battlefield& testField, RuleSet& rs, int& xSize, int& ySize) { string terrain = BattlefieldUtils::pickTerrain(); string armySplit[14];//id index 1 string ruleSplit[19];//in index 7 string armyP1, armyP2, ruleSet; Skill p1Skills[8]; Skill p2Skills[8]; CreatureStack p1Stacks[20]; CreatureStack p2Stacks[20]; ... } CreatureStack(){quantity = 0; isLive = false; id = -1;}; Army(){}; Battlefield(){}; RuleSet(){}; I have posted every line of code that executes until the program crashes. This code ran fine for a long time, I added some stuff that does not even execute until way after the code I have posted here, and bam stack overflow that occurs at GameEngine::loadData() line: CreatureStack p2Stacks[20]; will not go away. What am I doing wrong here? Is that all the stack can handle? I increased the stack size in Visual Studio and got the error to go away, but that slowed things down considerably, so I would rather just get to the source of the issue and fix that.

    Read the article

  • Updating Textures on Runtime in OpenSceneGraph

    - by Abhishek Bansal
    I am working on a project in which i am required to capture frames from external device video and render them on openSceneGraph Node. I am also using GLSL shaders. But i dont know how to update textures on runtime. For other uniforms we need to make callbacks but do we also need to make callbacks for samplers in glsl and openSceneGraph ? My code looks like this. All i am getting right now is a black window. osg::ref_ptr<osg::Geometry> pictureQuad = osg::createTexturedQuadGeometry(osg::Vec3(0.0f,0.0f,0.0f), osg::Vec3(_deviceNameToImageFrameMap[deviceName].frame->s(),0.0f,0.0f), osg::Vec3(0.0f,0.0f,_deviceNameToImageFrameMap[deviceName].frame->t()), 0.0f, 1.0f,_deviceNameToImageFrameMap[deviceName].frame->s(), _deviceNameToImageFrameMap[deviceName].frame->t()); //creating texture and setting up parameters for video frame osg::ref_ptr<osg::TextureRectangle> myTex= new osg::TextureRectangle(_deviceNameToImageFrameMap[deviceName].frame.get()); myTex->setFilter(osg::Texture::MIN_FILTER,osg::Texture::LINEAR); myTex->setFilter(osg::Texture::MAG_FILTER,osg::Texture::LINEAR); myTex->setWrap(osg::Texture::WRAP_S, osg::Texture::CLAMP_TO_EDGE); myTex->setWrap(osg::Texture::WRAP_T, osg::Texture::CLAMP_TO_EDGE); _videoSourceNameToNodeMap[sourceName].geode = new osg::Geode(); _videoSourceNameToNodeMap[sourceName].geode->setDataVariance(osg::Object::DYNAMIC); _videoSourceNameToNodeMap[sourceName].geode->addDrawable(pictureQuad.get()); //apply texture to node _videoSourceNameToNodeMap[sourceName].geode->getOrCreateStateSet()->setTextureAttributeAndModes(0, myTex.get(), osg::StateAttribute::ON); _videoSourceNameToNodeMap[sourceName].geode->getOrCreateStateSet()->setMode(GL_DEPTH_TEST, osg::StateAttribute::OFF); _videoSourceNameToNodeMap[sourceName].geode->setDataVariance(osg::Object::DYNAMIC); //Set uniform sampler osg::Uniform* srcFrame = new osg::Uniform( osg::Uniform::SAMPLER_2D, "srcFrame" ); srcFrame->set(0); //Set Uniform Alpha osg::Uniform* alpha = new osg::Uniform( osg::Uniform::FLOAT, "alpha" ); alpha->set(.5f); alpha->setUpdateCallback(new ExampleCallback()); //Enable blending _videoSourceNameToNodeMap[sourceName].geode->getOrCreateStateSet()->setMode( GL_BLEND, osg::StateAttribute::ON); //Adding blend function to node osg::BlendFunc *bf = new osg::BlendFunc(); bf->setFunction(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); _videoSourceNameToNodeMap[sourceName].geode->getOrCreateStateSet()->setAttributeAndModes(bf); //apply shader to quad _videoSourceNameToNodeMap[sourceName].geode->getOrCreateStateSet()->setAttributeAndModes(program, osg::StateAttribute::ON); //add Uniform to shader _videoSourceNameToNodeMap[sourceName].geode->getOrCreateStateSet()->addUniform( srcFrame ); _videoSourceNameToNodeMap[sourceName].geode->getOrCreateStateSet()->addUniform( alpha );

    Read the article

  • What is the best way to return two values from a method?

    - by Edward Tanguay
    When I have to write methods which return two values, I usually go about it as in the following code which returns a List<string>. Or if I have to return e.g. a id and string, then I return a List<object> and then pick them out with index number and recast the values. This recasting and referencing by index seems inelegant so I want to develop a new habit for methods that return two values. What is the best pattern for this? using System; using System.Collections.Generic; using System.Linq; namespace MultipleReturns { class Program { static void Main(string[] args) { string extension = "txt"; { List<string> entries = GetIdCodeAndFileName("first.txt", extension); Console.WriteLine("{0}, {1}", entries[0], entries[1]); } { List<string> entries = GetIdCodeAndFileName("first", extension); Console.WriteLine("{0}, {1}", entries[0], entries[1]); } Console.ReadLine(); } /// <summary> /// gets "first.txt", "txt" and returns "first", "first.txt" /// gets "first", "txt" and returns "first", "first.txt" /// it is assumed that extensions will always match /// </summary> /// <param name="line"></param> public static List<string> GetIdCodeAndFileName(string line, string extension) { if (line.Contains(".")) { List<string> parts = line.BreakIntoParts("."); List<string> returnItems = new List<string>(); returnItems.Add(parts[0]); returnItems.Add(line); return returnItems; } else { List<string> returnItems = new List<string>(); returnItems.Add(line); returnItems.Add(line + "." + extension); return returnItems; } } } public static class StringHelpers { public static List<string> BreakIntoParts(this string line, string separator) { if (String.IsNullOrEmpty(line)) return null; else { return line.Split(new string[] { separator }, StringSplitOptions.None).Select(p => p.Trim()).ToList(); } } } }
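
    One common alternative, sketched here with an illustrative result type (on .NET 4 a Tuple<string, string>, and in later C# a named value tuple, fills the same role): give the two values names instead of list positions, so nothing has to be re-cast or remembered by index.

    ```csharp
    public sealed class IdCodeAndFileName
    {
        public string IdCode { get; private set; }
        public string FileName { get; private set; }

        public IdCodeAndFileName(string idCode, string fileName)
        {
            IdCode = idCode;
            FileName = fileName;
        }
    }

    public static IdCodeAndFileName GetIdCodeAndFileName(string line, string extension)
    {
        int dot = line.IndexOf('.');
        if (dot >= 0)
            return new IdCodeAndFileName(line.Substring(0, dot), line);
        return new IdCodeAndFileName(line, line + "." + extension);
    }

    // Usage:
    //   IdCodeAndFileName entry = GetIdCodeAndFileName("first.txt", "txt");
    //   Console.WriteLine("{0}, {1}", entry.IdCode, entry.FileName);
    ```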

    Read the article

  • Refactoring Singleton Overuse

    - by drharris
    Today I had an epiphany, and it was that I was doing everything wrong. Some history: I inherited a C# application, which was really just a collection of static methods, a completely procedural mess of C# code. I refactored this the best I knew at the time, bringing in lots of post-college OOP knowledge. To make a long story short, many of the entities in code have turned out to be Singletons. Today I realized I needed 3 new classes, which would each follow the same Singleton pattern to match the rest of the software. If I keep tumbling down this slippery slope, eventually every class in my application will be Singleton, which will really be no logically different from the original group of static methods. I need help on rethinking this. I know about Dependency Injection, and that would generally be the strategy to use in breaking the Singleton curse. However, I have a few specific questions related to this refactoring, and all about best practices for doing so. How acceptable is the use of static variables to encapsulate configuration information? I have a brain block on using static, and I think it is due to an early OO class in college where the professor said static was bad. But, should I have to reconfigure the class every time I access it? When accessing hardware, is it ok to leave a static pointer to the addresses and variables needed, or should I continually perform Open() and Close() operations? Right now I have a single method acting as the controller. Specifically, I continually poll several external instruments (via hardware drivers) for data. Should this type of controller be the way to go, or should I spawn separate threads for each instrument at the program's startup? If the latter, how do I make this object oriented? Should I create classes called InstrumentAListener and InstrumentBListener? Or is there some standard way to approach this? Is there a better way to do global configuration? Right now I simply have Configuration.Instance.Foo sprinkled liberally throughout the code. Almost every class uses it, so perhaps keeping it as a Singleton makes sense. Any thoughts? A lot of my classes are things like SerialPortWriter or DataFileWriter, which must sit around waiting for this data to stream in. Since they are active the entire time, how should I arrange these in order to listen for the events generated when data comes in? Any other resources, books, or comments about how to get away from Singletons and other pattern overuse would be helpful.

    Read the article

  • Memory leaks getting sub-images from video (cvGetSubRect)

    - by dnul
    Hi, i'm trying to do video windowing that is: show all frames from a video and also some sub-image from each frame. This sub-image can change size and be taken from a different position of the original frame. So , the code i've written does basically this: cvQueryFrame to get a new image from the video Create a new IplImage (img) with sub-image dimensions ( window.height,window.width) Create a new Cvmat (mat) with sub-image dimensions ( window.height,window.width) CvGetSubRect(originalImage,mat,window) seizes the sub-image transform Mat (cvMat) to img (IplImage) using cvGetImage my problem is that for each frame i create new IplImage and cvMat which take a lot of memory and when i try to free the allocated memory I get a segmentation fault or in the case of the CvMat the allocated space does not get free (valgrind keeps telling me its definetly lost space). the following code does it: int main(void){ CvCapture* capture; CvRect window; CvMat * tmp; //window size window.x=0;window.y=0;window.height=100;window.width=100; IplImage * src=NULL,*bk=NULL,* sub=NULL; capture=cvCreateFileCapture( "somevideo.wmv"); while((src=cvQueryFrame(capture))!=NULL){ cvShowImage("common",src); //get sub-image sub=cvCreateImage(cvSize(window.height,window.width),8,3); tmp =cvCreateMat(window.height, window.width,CV_8UC1); cvGetSubRect(src, tmp , window); sub=cvGetImage(tmp, sub); cvShowImage("Window",sub); //free space if(bk!=NULL) cvReleaseImage(&bk); bk=sub; cvReleaseMat(&tmp); cvWaitKey(20); //window dimensions changes window.width++; window.height++; } } cvReleaseMat(&tmp); does not seem to have any effect on the total amount of lost memory, valgrind reports the same amount of "definetly lost" memory if i comment or uncomment this line. cvReleaseImage(&bk); produces a segmentation fault. notice i'm trying to free the previous sub-frame which i'm backing up in the bk variable. If i comment this line the program runs smoothly but with lots of memory leaks I really need to get rid of memory leaks, can anyone explain me how to correct this or even better how to correctly perform image windowing? Thank you
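
    A hedged sketch of the windowing loop without any per-frame allocation: cvGetSubRect only fills in a header that points at src's pixels, so a stack-allocated CvMat needs neither cvCreateMat nor cvReleaseMat, cvShowImage can display the view directly, and frames returned by cvQueryFrame must never be released by the caller.

    ```c
    #include "cv.h"
    #include "highgui.h"

    int main(void)
    {
        CvCapture *capture = cvCreateFileCapture("somevideo.wmv");
        CvRect window = cvRect(0, 0, 100, 100);
        IplImage *src;

        while ((src = cvQueryFrame(capture)) != NULL) {
            CvMat header;
            CvMat *sub = cvGetSubRect(src, &header, window);  /* a view into src, not a copy */

            cvShowImage("common", src);
            cvShowImage("Window", sub);

            cvWaitKey(20);
            window.width++;
            window.height++;          /* nothing to free: header lives on the stack */
        }
        cvReleaseCapture(&capture);
        return 0;
    }
    ```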

    Read the article

  • C# MultiThread Safe Class Design

    - by Robert
    I'm trying to designing a class and I'm having issues with accessing some of the nested fields and I have some concerns with how multithread safe the whole design is. I would like to know if anyone has a better idea of how this should be designed or if any changes that should be made? using System; using System.Collections; namespace SystemClass { public class Program { static void Main(string[] args) { System system = new System(); //Seems like an awkward way to access all the members dynamic deviceInstance = (((DeviceType)((DeviceGroup)system.deviceGroups[0]).deviceTypes[0]).deviceInstances[0]); Boolean checkLocked = deviceInstance.locked; //Seems like this method for accessing fields might have problems with multithreading foreach (DeviceGroup dg in system.deviceGroups) { foreach (DeviceType dt in dg.deviceTypes) { foreach (dynamic di in dt.deviceInstances) { checkLocked = di.locked; } } } } } public class System { public ArrayList deviceGroups = new ArrayList(); public System() { //API called to get names of all the DeviceGroups deviceGroups.Add(new DeviceGroup("Motherboard")); } } public class DeviceGroup { public ArrayList deviceTypes = new ArrayList(); public DeviceGroup() {} public DeviceGroup(string deviceGroupName) { //API called to get names of all the Devicetypes deviceTypes.Add(new DeviceType("Keyboard")); deviceTypes.Add(new DeviceType("Mouse")); } } public class DeviceType { public ArrayList deviceInstances = new ArrayList(); public bool deviceConnected; public DeviceType() {} public DeviceType(string DeviceType) { //API called to get hardwareIDs of all the device instances deviceInstances.Add(new Mouse("0001")); deviceInstances.Add(new Keyboard("0003")); deviceInstances.Add(new Keyboard("0004")); //Start thread CheckConnection that updates deviceConnected periodically } public void CheckConnection() { //API call to check connection and returns true this.deviceConnected = true; } } public class Keyboard { public string hardwareAddress; public bool keypress; public bool deviceConnected; public Keyboard() {} public Keyboard(string hardwareAddress) { this.hardwareAddress = hardwareAddress; //Start thread to update deviceConnected periodically } public void CheckKeyPress() { //if API returns true this.keypress = true; } } public class Mouse { public string hardwareAddress; public bool click; public Mouse() {} public Mouse(string hardwareAddress) { this.hardwareAddress = hardwareAddress; } public void CheckClick() { //if API returns true this.click = true; } } }

    Read the article

  • Basic drawing with Quartz 2D on iPhone

    - by wwrob
    My goal is to make a program that will draw points whenever the screen is touched. This is what I have so far: The header file: #import <UIKit/UIKit.h> @interface ElSimView : UIView { CGPoint firstTouch; CGPoint lastTouch; UIColor *pointColor; CGRect *points; int npoints; } @property CGPoint firstTouch; @property CGPoint lastTouch; @property (nonatomic, retain) UIColor *pointColor; @property CGRect *points; @property int npoints; @end The implementation file: //@synths etc. - (id)initWithFrame:(CGRect)frame { return self; } - (id)initWithCoder:(NSCoder *)coder { if(self = [super initWithCoder:coder]) { self.npoints = 0; } return self; } - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *touch = [touches anyObject]; firstTouch = [touch locationInView:self]; lastTouch = [touch locationInView:self]; } - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *touch = [touches anyObject]; lastTouch = [touch locationInView:self]; points = (CGRect *)malloc(sizeof(CGRect) * ++npoints); points[npoints-1] = CGRectMake(lastTouch.x-15, lastTouch.y-15,30,30); [self setNeedsDisplay]; } - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *touch = [touches anyObject]; lastTouch = [touch locationInView:self]; [self setNeedsDisplay]; } - (void)drawRect:(CGRect)rect { CGContextRef context = UIGraphicsGetCurrentContext(); CGContextSetLineWidth(context, 2.0); CGContextSetStrokeColorWithColor(context, [UIColor blackColor].CGColor); CGContextSetFillColorWithColor(context, pointColor.CGColor); for(int i=0; i<npoints; i++) CGContextAddEllipseInRect(context, points[i]); CGContextDrawPath(context, kCGPathFillStroke); } - (void)dealloc { free(points); [super dealloc]; } @end When I load this and click some points, it draws the first points normally, then then next points are drawn along with random ellipses (not even circles). Also I have another question: When is exactly drawRect executed?
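
    On the stray ellipses: each touch replaces points with a brand-new malloc'd block, so every previously stored rect is lost (and the old block leaks), and all but the last slot of the new block hold uninitialized garbage, which is what gets drawn. A hedged fix for touchesEnded: is to grow the existing buffer instead:

    ```objc
    - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
    {
        UITouch *touch = [touches anyObject];
        lastTouch = [touch locationInView:self];

        // realloc keeps the rects collected so far; malloc would discard and leak them.
        points = (CGRect *)realloc(points, sizeof(CGRect) * ++npoints);
        points[npoints - 1] = CGRectMake(lastTouch.x - 15, lastTouch.y - 15, 30, 30);

        [self setNeedsDisplay];
    }
    ```

    As for the second question: drawRect: is not called at the moment of the touch itself; the system calls it on the next pass through the run loop after setNeedsDisplay (or a resize, or the view first appearing) has marked the view as needing display.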

    Read the article

  • Akka framework support for finding duplicate messages

    - by scala_is_awesome
    I'm trying to build a high-performance distributed system with Akka and Scala. If a message requesting an expensive (and side-effect-free) computation arrives, and the exact same computation has already been requested before, I want to avoid computing the result again. If the computation requested previously has already completed and the result is available, I can cache it and re-use it. However, the time window in which duplicate computation can be requested may be arbitrarily small. e.g. I could get a thousand or a million messages requesting the same expensive computation at the same instant for all practical purposes. There is a commercial product called Gigaspaces that supposedly handles this situation. However there seems to be no framework support for dealing with duplicate work requests in Akka at the moment. Given that the Akka framework already has access to all the messages being routed through the framework, it seems that a framework solution could make a lot of sense here. Here is what I am proposing for the Akka framework to do: 1. Create a trait to indicate a type of messages (say, "ExpensiveComputation" or something similar) that are to be subject to the following caching approach. 2. Smartly (hashing etc.) identify identical messages received by (the same or different) actors within a user-configurable time window. Other options: select a maximum buffer size of memory to be used for this purpose, subject to (say LRU) replacement etc. Akka can also choose to cache only the results of messages that were expensive to process; the messages that took very little time to process can be re-processed again if needed; no need to waste precious buffer space caching them and their results. 3. When identical messages (received within that time window, possibly "at the same time instant") are identified, avoid unnecessary duplicate computations. The framework would do this automatically, and essentially, the duplicate messages would never get received by a new actor for processing; they would silently vanish and the result from processing it once (whether that computation was already done in the past, or ongoing right then) would get sent to all appropriate recipients (immediately if already available, and upon completion of the computation if not). Note that messages should be considered identical even if the "reply" fields are different, as long as the semantics/computations they represent are identical in every other respect. Also note that the computation should be purely functional, i.e. free from side-effects, for the caching optimization suggested to work and not change the program semantics at all. If what I am suggesting is not compatible with the Akka way of doing things, and/or if you see some strong reasons why this is a very bad idea, please let me know. Thanks, Is Awesome, Scala
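
    Absent framework support, this is commonly handled in user code by a front actor that owns the cache and the in-flight bookkeeping. A hedged sketch in Akka-classic style (actor and message names are illustrative, and the cache here grows without bound):

    ```scala
    import akka.actor.{ Actor, ActorRef }

    final case class Computed(request: Any, result: Any)

    class Deduplicator(worker: ActorRef) extends Actor {
      private var cache   = Map.empty[Any, Any]             // finished request -> result
      private var pending = Map.empty[Any, List[ActorRef]]  // in-flight request -> waiting callers

      def receive: Receive = {
        case Computed(req, result) =>
          cache += req -> result
          pending.getOrElse(req, Nil).foreach(_ ! result)   // answer everyone who asked meanwhile
          pending -= req

        case req if cache.contains(req) =>
          sender() ! cache(req)                             // already computed: reply immediately

        case req if pending.contains(req) =>
          pending += req -> (sender() :: pending(req))      // identical request in flight: just queue the caller

        case req =>
          pending += req -> List(sender())
          worker ! req                                      // worker is expected to reply with Computed(req, result)
      }
    }
    ```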

    Read the article
