Search Results

Search found 10366 results on 415 pages for 'const char pointer'.

Page 345/415 | < Previous Page | 341 342 343 344 345 346 347 348 349 350 351 352  | Next Page >

  • Optional Member Objects

    - by David Relihan
    Okay, so you have a load of methods sprinkled around your system's main class. So you do the right thing and refactor by creating a new class and moving the method(s) into it. The new class has a single responsibility and all is right with the world again: class Feature { public: Feature(){}; void doSomething(); void doSomething1(); void doSomething2(); }; So now your original class has a member variable of type object: Feature _feature; which you will call in the main class. Now if you do this many times, you will have many member objects in your main class. Now these features may or may not be required based on configuration, so in a way it's costly having all these objects that may or may not be needed. Can anyone suggest a way of improving this? At the moment I plan to test in the newly created class if the feature is enabled - so when a call is made to a method it will return if the feature is not enabled. I could have a pointer to the object and then only call new if the feature is enabled - but this means I will have to test before I call a method on it, which would be potentially dangerous and not very readable. Would having an auto_ptr to the object improve things: auto_ptr<Feature> feature; Or am I still paying the cost of object invocation even though the object may or may not be required? BTW - I don't think this is premature optimisation - I just want to consider the possibilities.
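
    A minimal sketch of the lazy-construction idea, using std::unique_ptr (C++11) in place of auto_ptr; the FeatureHost wrapper name and the featureEnabled flag are made up for illustration, not part of the original question:

        #include <memory>

        class Feature {
        public:
            void doSomething() {}
            void doSomething1() {}
        };

        class FeatureHost {
        public:
            explicit FeatureHost(bool featureEnabled) {
                if (featureEnabled)                 // construct only when configured on,
                    _feature.reset(new Feature()); // so a disabled feature costs nothing
            }
            void doSomething() {
                if (_feature)                       // the null check lives in one place;
                    _feature->doSomething();        // callers never see the pointer
            }
        private:
            std::unique_ptr<Feature> _feature;      // empty when the feature is off
        };

    The test is centralised in the wrapper, so call sites stay readable, and the construction cost is only paid when the configuration actually enables the feature.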

    Read the article

  • MySQL error in stored procedure

    - by devuser
    This stored procedure is to search through all tables and columns in a database. DELIMITER $$ DROP PROCEDURE IF EXISTS get_table $$ CREATE /*[DEFINER = { user | CURRENT_USER }]*/ PROCEDURE `auradoxdb`.`get_table`(in_search varchar(50)) READS SQL DATA BEGIN DECLARE trunc_cmd VARCHAR(50); DECLARE search_string VARCHAR(250); DECLARE db,tbl,clmn CHAR(50); DECLARE done INT DEFAULT 0; DECLARE COUNTER INT; DECLARE table_cur CURSOR FOR SELECT concat(SELECT COUNT(*) INTO @CNT_VALUE FROM `’,table_schema,’`.`’,table_name,’` WHERE `’, column_name,’` REGEXP ”’,in_search,”’) ,table_schema,table_name,column_name FROM information_schema.COLUMNS WHERE TABLE_SCHEMA NOT IN (‘information_schema’,'test’,'mysql’); DECLARE CONTINUE HANDLER FOR NOT FOUND SET done=1; # #Truncating table for refill the data for new search. PREPARE trunc_cmd FROM “TRUNCATE TABLE temp_details;” EXECUTE trunc_cmd ; OPEN table_cur; table_loop:LOOP FETCH table_cur INTO search_string,db,tbl,clmn; # #Executing the search SET @search_string = search_string; SELECT search_string; PREPARE search_string FROM @search_string; EXECUTE search_string; SET COUNTER = @CNT_VALUE; SELECT COUNTER; IF COUNTER>0 THEN # # Inserting required results from search to tablehhh INSERT INTO temp_details VALUES(db,tbl,clmn); END IF; IF done=1 THEN LEAVE table_loop; END IF; END LOOP; CLOSE table_cur; # #Finally Show Results SELECT * FROM temp_details; END $$ DELIMITER ; The following error occurs when I execute it: Error Code : 1064 You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'SELECT COUNT(*) INTO @CNT_VALUE FROM `’,table_schema,’`.`’,table_name,’`' at line 12 (0 ms taken) Could anybody please help me solve this?

    Read the article

  • Creating a QMainWindow from Java using JNI

    - by ebasconp
    Hi everybody: I'm trying to create a Qt main window from Java using JNI directly and I got a threading error. My code looks like this: Test class: public class Test { public static void main(String... args) { System.out.println(System.getProperty("java.library.path")); TestWindow f = new TestWindow(); f.show(); } } TestWindow class: public class TestWindow { static { System.loadLibrary("mylib"); } public native void show(); } C++ impl: void JNICALL Java_testpackage_TestWindow_show (JNIEnv *, jobject) { int c = 0; char** a = NULL; QApplication* app = new QApplication(c, a); QMainWindow* mw = new QMainWindow(); mw->setWindowTitle("Hello"); mw->setGeometry(150, 150, 400, 300); mw->show(); QApplication::exec(); } and I get my window painted but frozen (it does not receive any event) and the following error message when instantiating the QMainWindow object: QCoreApplication::sendPostedEvents: Cannot send posted events for objects in another thread I know all the UI operations must be done in the UI thread, but in my example I created the QApplication in the only thread I have running, so everything should work properly. I did some tests executing the code of my "show" method via QMetaObject::invokeMethod using Qt::QueuedConnection but nothing works properly. I know I could use Jambi... but I know that it could be done natively too and that is what I want to do :) Any ideas on this? Thanks in advance! Ernesto

    Read the article

  • Pointers in C# to make int array?

    - by Joshua
    The following C++ program compiles and runs as expected: #include <stdio.h> int main(int argc, char* argv[]) { int* test = new int[10]; for (int i = 0; i < 10; i++) test[i] = i * 10; printf("%d \n", test[5]); // 50 printf("%d \n", 5[test]); // 50 return getchar(); } The closest C# simple example I could make for this question is: using System; class Program { unsafe static int Main(string[] args) { // error CS0029: Cannot implicitly convert type 'int[]' to 'int*' int* test = new int[10]; for (int i = 0; i < 10; i++) test[i] = i * 10; Console.WriteLine(test[5]); // 50 Console.WriteLine(5[test]); // Error return (int)Console.ReadKey().Key; } } So how do I make the pointer?

    Read the article

  • Template operator linker error

    - by Dani
    I have a linker error I've reduced to a simple example. The build output is: debug/main.o: In function `main': C:\Users\Dani\Documents\Projects\Test1/main.cpp:5: undefined reference to `log& log::operator<< (char const (&) [6])' collect2: ld returned 1 exit status It looks like the linker ignores the definition in log.cpp. I also can't put the definition in log.h because I include the file a lot of times and it complains about redefinitions. main.cpp: #include "log.h" int main() { log() << "hello"; return 0; } log.h: #ifndef LOG_H #define LOG_H class log { public: log(); template<typename T> log &operator <<(T &t); }; #endif // LOG_H log.cpp: #include "log.h" #include <iostream> log::log() { } template<typename T> log &log::operator <<(T &t) { std::cout << t << std::endl; return *this; }
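
    The usual cause is that the template operator defined only in log.cpp is never instantiated there for the argument types used from main.cpp. A minimal sketch of the common fix - keep the definition in the header so every translation unit can instantiate it (the existing include guard already prevents the redefinition complaints):

        // log.h
        #ifndef LOG_H
        #define LOG_H
        #include <iostream>

        class log {
        public:
            log() {}
            template<typename T>
            log &operator <<(const T &t) {      // defined in the header so the compiler
                std::cout << t << std::endl;    // can instantiate it at each call site
                return *this;
            }
        };

        #endif // LOG_H

    The alternative, if the definition must stay in log.cpp, is an explicit instantiation there for every argument type ever used, which is awkward for string literals like char const (&)[6], so the header-side definition is the usual choice.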

    Read the article

  • C# TcpClient, getting back the entire response from a telnet device

    - by Dan Bailiff
    I'm writing a configuration tool for a device that can communicate via telnet. The tool sends a command via TcpClient.GetStream().Write(...), and then checks for the device response via TcpClient.GetStream().ReadByte(). This works fine in unit tests or when I'm stepping through code slowly. If I run the config tool such that it performs multiple commands consecutively, then the behavior of the read is inconsistent. By inconsistent I mean sometimes the data is missing, incomplete or partially garbled. So even though the device performed the operation and responded, my code to check for the response was unreliable. I "fixed" this problem by adding a Thread.Sleep to make sure the read method waits long enough for the device to complete its job and send back the whole response (in this case 1.5 seconds was a safe amount). I feel like this is wrong, blocking everything for a fixed time, and I wonder if there is a better way to get the read method to wait only long enough to get the whole response from the device. private string Read() { if (!tcpClient.Connected) throw (new Exception("Read() failed: Telnet connection not available.")); StringBuilder sb = new StringBuilder(); do { ParseTelnet(ref sb); System.Threading.Thread.Sleep(1500); } while (tcpClient.Available > 0); return sb.ToString(); } private void ParseTelnet(ref StringBuilder sb) { while (tcpClient.Available > 0) { int input = tcpClient.GetStream().ReadByte(); switch (input) { // parse the input // ... do other things in special cases default: sb.Append((char)input); break; } } }

    Read the article

  • int considered harmful?

    - by Chris Becke
    Working on code meant to be portable between Win32 and Win64 and Cocoa, I am really struggling to get to grips with what the @#$% the various standards committees involved over the past decades were thinking when they first came up with, and then perpetuated, the crime against humanity that is the native C type set - char, short, int and long. On the one hand, as an old-school C++ programmer, there are few statements that were as elegant and/or as simple as for(int i=0; i<some_max; i++) but now, it seems that, in the general case, this code can never be correct. Oh sure, given a particular version of MSVC or GCC, with specific targets, the size of 'int' can be safely assumed. But, in the case of writing very generic C/C++ code that might one day be used on 16 bit hardware, or 128, or just be exposed to a particularly weirdly setup 32/64 bit compiler, how does one use int in C++ code in a way that the resulting program has predictable behavior in any and all possible C++ compilers that implement C++ according to spec? To resolve these unpredictabilities, C99 and C++98 introduced size_t, uintptr_t, ptrdiff_t, int8_t, int16_t, int32_t, int64_t and so on. Which leaves me thinking that a raw int, anywhere in pure C++ code, should really be considered harmful, as there is some (completely C++xx conforming) compiler that's going to produce an unexpected or incorrect result with it. (and probably be an attack vector as well)
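
    A small illustration of the fixed-width alternative, assuming a compiler that ships <cstdint> (C99's stdint.h or C++11):

        #include <cstdint>
        #include <cstddef>

        int main() {
            const std::size_t some_max = 10;            // size_t for counts and indices
            for (std::uint32_t i = 0; i < some_max; ++i) {
                // exactly 32 bits wherever uint32_t exists; nothing assumed about 'int'
            }
            std::int64_t big = static_cast<std::int64_t>(1) << 40;  // width is explicit
            (void)big;                                  // silence unused-variable warnings
            return 0;
        }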

    Read the article

  • If the address of a function can not be resolved during deduction, is it SFINAE or a compiler error?

    - by Faisal Vali
    In C++0x SFINAE rules have been simplified such that any invalid expression or type that occurs in the "immediate context" of deduction does not result in a compiler error but rather in deduction failure (SFINAE). My question is this: If I take the address of an overloaded function and it can not be resolved, is that failure in the immediate-context of deduction? (i.e is it a hard error or SFINAE if it can not be resolved)? Here is some sample code: struct X { // template T* foo(T,T); // lets not over-complicate things for now void foo(char); void foo(int); }; template struct S { template struct size_map { typedef int type; }; // here is where we take the address of a possibly overloaded function template void f(T, typename size_map::type* = 0); void f(...); }; int main() { S s; // should this cause a compiler error because 'auto T = &X::foo' is invalid? s.f(3); } Gcc 4.5 states that this is a compiler error, and clang spits out an assertion violation. Here are some more related questions of interest: Does the FCD-C++0x clearly specify what should happen here? Are the compilers wrong in rejecting this code? Does the "immediate-context" of deduction need to be defined a little better? Thanks!

    Read the article

  • Weird Qt SSL issue -- error "No Error" shows up, nothing else, and if I ignore it, everything works

    - by houbysoft
    The issue is as follows: in my Qt app, I have a QWebView, which I use to load an HTTPS page. Everything worked fine on my development machine, so I'm now trying to get it to run on a test machine. I ran the app, but the page didn't load (the QWebView was blank). After much debugging, I found the problem is that an SSL error shows up, and the sslErrors() signal is fired. Here is my sslErrors() handling code: void blah::sslErrors(QNetworkReply *reply, const QList<QSslError> &errors) { foreach(QSslError error, errors) { qDebug() << error.errorString() << endl; } reply->ignoreSslErrors(); } The only thing the above code prints is: "No error" So there's no error, but unless I call reply->ignoreSslErrors(), the page doesn't load (on the test machine; on my developer computer no error is reported). Huh? Is this a bug? Is it safe to ignore the error, if I make sure it's of the type "No error"?
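
    A guess at a safer policy, sketched as a drop-in replacement for the handler above (Qt 4-style API assumed): inspect the QSslError::SslError code of each entry and only ignore the errors when every one of them really is QSslError::NoError, since a default-constructed QSslError does report itself as "No error":

        void blah::sslErrors(QNetworkReply *reply, const QList<QSslError> &errors)
        {
            bool allHarmless = true;
            foreach (const QSslError &error, errors) {
                qDebug() << int(error.error()) << error.errorString();  // code + text
                if (error.error() != QSslError::NoError)
                    allHarmless = false;                                // a real SSL problem
            }
            if (allHarmless)
                reply->ignoreSslErrors();   // only bypass when nothing real was reported
        }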

    Read the article

  • Suggestions of the easiest algorithms for some Graph operations

    - by Nazgulled
    Hi, The deadline for this project is closing in very quickly and I don't have much time to deal with what's left. So, instead of looking for the best (and probably more complicated/time consuming) algorithms, I'm looking for the easiest algorithms to implement a few operations on a Graph structure. The operations I'll need to do are as follows: List all users in the graph network given a distance X List all users in the graph network given a distance X and the type of relation Calculate the shortest path between 2 users on the graph network given a type of relation Calculate the maximum distance between 2 users on the graph network Calculate the most distant connected users on the graph network A few notes about my Graph implementation: The edge node has 2 properties, one of type char and another of type int. They represent the type of relation and weight, respectively. The Graph is implemented with linked lists, for both the vertices and edges. I mean, each vertex points to the next one and each vertex also points to the head of a different linked list, the edges for that specific vertex. What I know about what I need to do: I don't know if this is the easiest as I said above, but for the shortest path between 2 users, I believe the Dijkstra algorithm is what people seem to recommend pretty often so I think I'm going with that. I've been searching and searching and I'm finding it hard to implement this algorithm, does anyone know of any tutorial or something easy to understand so I can implement this algorithm myself? If possible, with C source code examples, it would help a lot. I see many examples with math notations but that just confuses me even more. Do you think it would help if I "converted" the graph to an adjacency matrix to represent the links' weight and relation type? Would it be easier to perform the algorithm on that instead of the linked lists? I could easily implement a function to do that conversion when needed. I'm saying this because I got the feeling it would be easier after reading a couple of pages about the subject, but I could be wrong. I don't have any ideas about the other 4 operations, suggestions?
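
    A minimal Dijkstra sketch over an adjacency-list graph, shown in C++ with std::priority_queue rather than the plain C the question asks for; the Edge layout here is a stand-in for the linked-list structure described above, so treat it as an illustration of the algorithm, not drop-in code:

        #include <vector>
        #include <queue>
        #include <limits>
        #include <utility>

        struct Edge { int to; int weight; char relation; };

        // dist[v] = length of the shortest path from 'source' to vertex v
        std::vector<int> dijkstra(const std::vector<std::vector<Edge> > &adj, int source) {
            const int INF = std::numeric_limits<int>::max();
            std::vector<int> dist(adj.size(), INF);
            typedef std::pair<int, int> Item;                      // (distance, vertex)
            std::priority_queue<Item, std::vector<Item>, std::greater<Item> > pq;
            dist[source] = 0;
            pq.push(Item(0, source));
            while (!pq.empty()) {
                Item top = pq.top(); pq.pop();
                int d = top.first, u = top.second;
                if (d != dist[u]) continue;                        // stale queue entry
                for (size_t i = 0; i < adj[u].size(); ++i) {
                    const Edge &e = adj[u][i];
                    if (dist[u] + e.weight < dist[e.to]) {
                        dist[e.to] = dist[u] + e.weight;
                        pq.push(Item(dist[e.to], e.to));
                    }
                }
            }
            return dist;   // "users within distance X" is then every v with dist[v] <= X
        }

    Filtering by relation type just means skipping edges whose relation field does not match before relaxing them; the adjacency-matrix conversion is not required for any of this to work.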

    Read the article

  • How to modulate every texture unit in OpenGL ES 1.1?

    - by Jesse Beder
    I have two textures and a "blend factor", and I'd like to mix them, modulated by the current color; in effect, I want to use the following shader: gl_FragColor = gl_Color * mix(tex0, tex1, blendFactor); I'm using OpenGL ES 1.1, targeting all versions of the iPhone, so I can't use shaders, and I have two texture units. My best attempt is: // texture 0 glActiveTexture(GL_TEXTURE0); image1.Bind(); glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE); // texture 1 glActiveTexture(GL_TEXTURE1); image2.Bind(); glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE); glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_INTERPOLATE); glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_PREVIOUS); glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR); glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_TEXTURE); glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR); glTexEnvi(GL_TEXTURE_ENV, GL_SRC2_RGB, GL_CONSTANT); glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB, GL_SRC_ALPHA); glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_INTERPOLATE); glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_ALPHA, GL_PREVIOUS); glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA); glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_ALPHA, GL_TEXTURE); glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_ALPHA, GL_SRC_ALPHA); glTexEnvi(GL_TEXTURE_ENV, GL_SRC2_ALPHA, GL_CONSTANT); glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_ALPHA, GL_SRC_ALPHA); const float factor[] = { 0, 0, 0, blendFactor }; glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, factor); This has the effect of the shader: gl_FragColor = mix(gl_Color * tex0, tex1, blendFactor); but I don't see how to modulate texture 1 by the color. Is there any way to have the color provided by a texture unit automatically modulated by the incoming primary color? Or any other way to do what I want that I'm missing? Multiple passes are definitely allowed, but they have to have the proper blend effect; I have glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA); enabled, so it can be tricky to get right with multiple passes.

    Read the article

  • SDL with OpenGL (freeglut) crashes on call to glutBitmapCharacter

    - by stett
    I have a program using OpenGL through freeglut under SDL. The SDL/OpenGL initialization is as follows: // Initialize SDL SDL_Init(SDL_INIT_VIDEO); // Create the SDL window SDL_SetVideoMode(SCREEN_W, SCREEN_H, SCREEN_DEPTH, SDL_OPENGL); // Initialize OpenGL glClearColor(BG_COLOR_R, BG_COLOR_G, BG_COLOR_B, 1.f); glViewport(0, 0, SCREEN_W, SCREEN_H); glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrtho(0.0f, SCREEN_W, SCREEN_H, 0.0f, -1.0f, 1.0f); glEnable(GL_TEXTURE_2D); glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); I've been using glBegin() ... glEnd() blocks without any trouble to draw primitives. However, in this program when I call any glutBitmapX function, the program simply exits without an error status. The code I'm using to draw text is: glColor3f(1.f, 1.f, 1.f); glRasterPos2f(x, y); glutStrokeString(GLUT_BITMAP_8_BY_13, (const unsigned char*)"test string"); In previous similar programs I've used glutBitmapCharacter and glutStrokeString to draw text and it seemed to work. The only difference being that I'm using freeglut with SDL now instead of just GLUT as I did in previous programs. Is there some fundamental problem with my setup that I'm not seeing, or is there a better way of drawing text?
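
    A hedged guess at the failure, with a sketch of both likely fixes: freeglut, unlike classic GLUT, aborts the process when any of its functions run before glutInit(), even if SDL owns the window and context; and glutStrokeString expects a stroke font, so a bitmap font such as GLUT_BITMAP_8_BY_13 belongs with the bitmap call instead:

        #include <GL/freeglut.h>

        // Call once, before any other glut* call (SDL still creates the window).
        void initFontSupport(int argc, char **argv) {
            glutInit(&argc, argv);
        }

        void drawText(float x, float y, const char *msg) {
            glColor3f(1.f, 1.f, 1.f);
            glRasterPos2f(x, y);
            // bitmap font -> glutBitmapString; a stroke font such as GLUT_STROKE_ROMAN
            // would go with glutStrokeString instead
            glutBitmapString(GLUT_BITMAP_8_BY_13, (const unsigned char *)msg);
        }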

    Read the article

  • Why doesn't this Either-monad code type check?

    - by pf_miles
    instance Monad (Either a) where return = Left fail = Right Left x >>= f = f x Right x >>= _ = Right x This code fragment in 'baby.hs' caused the horrible compilation error: Prelude> :l baby [1 of 1] Compiling Main ( baby.hs, interpreted ) baby.hs:2:18: Couldn't match expected type `a1' against inferred type `a' `a1' is a rigid type variable bound by the type signature for `return' at <no location info> `a' is a rigid type variable bound by the instance declaration at baby.hs:1:23 In the expression: Left In the definition of `return': return = Left In the instance declaration for `Monad (Either a)' baby.hs:3:16: Couldn't match expected type `[Char]' against inferred type `a1' `a1' is a rigid type variable bound by the type signature for `fail' at <no location info> Expected type: String Inferred type: a1 In the expression: Right In the definition of `fail': fail = Right baby.hs:4:26: Couldn't match expected type `a1' against inferred type `a' `a1' is a rigid type variable bound by the type signature for `>>=' at <no location info> `a' is a rigid type variable bound by the instance declaration at baby.hs:1:23 In the first argument of `f', namely `x' In the expression: f x In the definition of `>>=': Left x >>= f = f x baby.hs:5:31: Couldn't match expected type `b' against inferred type `a' `b' is a rigid type variable bound by the type signature for `>>=' at <no location info> `a' is a rigid type variable bound by the instance declaration at baby.hs:1:23 In the first argument of `Right', namely `x' In the expression: Right x In the definition of `>>=': Right x >>= _ = Right x Failed, modules loaded: none. Why does this happen? And how could I make this code compile? Thanks for any help~

    Read the article

  • Better way of looping to detect change.

    - by Dremation
    As of now I'm using a while(true) method to detect changes in memory. The problem with this is it kills the application's performance. I have a list of 30 pointers that need to be checked as rapidly as possible for changes, without incurring a huge performance loss. Anyone have ideas on this? memScan = new Thread(ScanMem); public static void ScanMem() { int i = addy.Length; while (true) { Thread.Sleep(30000); //I do this to cut down on cpu usage for (int j = 0; j < i; j++) { string[] values = addy[j].Split(new char[] { Convert.ToChar(",") }); //MessageBox.Show(values[2]); try { if (Memory.Scanner.getIntFromMem(hwnd, (IntPtr)Convert.ToInt32(values[0], 16), 32).ToString() != values[1].ToString()) { //Ok, it changed lets do our work //work if (Globals.Working) return; SomeFunction("Results: " + values[2].ToString(), "Memory"); Globals.Working = true; }//end if }//end try catch { } }//end for }//end while }//end void

    Read the article

  • server/client connection

    - by user312054
    I have a server side program that creates a listening server side socket. The problem is that when the client side sends a connect request, it seems to get rejected if the server side socket is listening, but it connects if the server side program is not running. I can see the server side program getting the client request when debugging. It seems as if the client cannot connect to a listening socket. Any suggestions on a resolution? The server side accept code snippet is this. void CSocketListen::OnAccept(int nErrorCode) { CSocket::OnAccept(nErrorCode); CSocketServer* SocketPtr = new CSocketServer(); if (Accept(*SocketPtr)) { // add to list of client sockets connected } else { delete SocketPtr; } The client side connect code is like this. SOCKET cellModem; sockaddr_in handHeld; handHeld.sin_family = AF_INET; //Address family handHeld.sin_addr.s_addr = inet_addr("127.0.0.1"); handHeld.sin_port = htons((u_short)1113); //port to use cellModem=socket(AF_INET,SOCK_STREAM,0); if(cellModem == INVALID_SOCKET) { // log socket failure return false; } else { // log socket success } if (connect(cellModem,(const struct sockaddr*)&handHeld, sizeof(handHeld)) != 0 ) { // log socket connection success } else { // log socket connection failure closesocket(cellModem); }
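
    One thing worth checking, sketched below: Winsock's connect() returns 0 on success and SOCKET_ERROR on failure, so the success and failure branches in the client snippet above appear inverted, which would make a refused connection look like a success and vice versa. A hedged rewrite of just that check:

        #include <winsock2.h>

        // Returns true only when the TCP handshake actually completed.
        bool connectToServer(SOCKET cellModem, const sockaddr_in &handHeld) {
            if (connect(cellModem, (const struct sockaddr *)&handHeld,
                        sizeof(handHeld)) == 0) {
                // 0 = success: log the successful connection here
                return true;
            }
            // SOCKET_ERROR: log WSAGetLastError() and clean up
            closesocket(cellModem);
            return false;
        }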

    Read the article

  • How to make gcc on SUN calculate floating points the same way as in Linux

    - by Marina
    I have a project where I have to perform some mathematical calculations with double variables. The problem is that I get different results on SUN Solaris 9 and Linux. There are a lot of ways (explained here and on other forums) to make Linux work like Sun, but not the other way around. I cannot touch the Linux code, so it is only SUN I can change. Is there any way to make SUN behave like Linux? The code I run (compiled with gcc on both systems): int hash_func(char *long_id) { double product, lnum, gold; while (*long_id) lnum = lnum * 10.0 + (*long_id++ - '0'); printf("lnum => %20.20f\n", lnum); lnum = lnum * 10.0E-8; printf("lnum => %20.20f\n", lnum); gold = 0.6125423371582974; product = lnum * gold; printf("product => %20.20f\n", product); ... } If the input is 339886769243483, the output on Linux is: lnum => 339886769243**483**.00000000000000000000 lnum => 33988676.9243**4829473495483398** product => 20819503.600158**59827399253845** When on SUN: lnum => 339886769243483.00000000000000000000 lnum => 33988676.92434830218553543091 product = 20819503.600158**60199928283691** Note: the result is not always different; most of the time it is the same. Just 10 15-digit numbers out of 60000 have this problem. Please help!!!

    Read the article

  • {DCC Warning} W1036 Variable '$frame' might not have been initialized?

    - by Gad D Lord
    Any ideas why I get this warning in Delphi XE: [DCC Warning] Form1.pas(250): W1036 Variable '$frame' might not have been initialized procedure TForm1.Action1Execute(Sender: TObject); var Thread: TThread; begin ... Thread := TThread.CreateAnonymousThread( procedure{Anonymous}() procedure ShowLoading(const Show: Boolean); begin /// <------------- WARNING IS GIVEN FOR THIS LINE (line number 250) Thread.Synchronize(Thread, procedure{Anonymous}() begin ... Button1.Enabled := not Show; ... end ); end; var i: Integer; begin ShowLoading(true); try Thread.Synchronize(Thread, procedure{Anonymous}() begin ... // some UI updates end Thread.Synchronize(Thread, procedure{Anonymous}() begin ... // some UI updates end ); finally ShowLoading(false); end; end ).NameThread('Some Thread Name'); Thread.Start; end; I do not have anywhere in my code a variable named frame nor $frame. I am not even sure how $frame, with a $ sign, can be a valid identifier. Smells like compiler magic to me. PS: Of course the real-life code has names other than Form1, Button1, Action1.

    Read the article

  • BN_hex2bn magically segfaults in openSSL

    - by xunil154
    Greetings, this is my first post on stackoverflow, and I'm sorry if it's a bit long. I'm trying to build a handshake protocol for my own project and am having issues with the server converting the client's RSA public key to a Bignum. It works in my client code, but the server segfaults when attempting to convert the hex value of the client's public RSA to a bignum. I have already checked that there is no garbage before or after the RSA data, and have looked online, but I'm stuck. header segment: typedef struct KEYS { RSA *serv; char* serv_pub; int pub_size; RSA *clnt; } KEYS; KEYS keys; Initializing function: // Generates and validates the servers key /* code for generating server RSA left out, it's working */ //Set client exponent keys.clnt = 0; keys.clnt = RSA_new(); BN_dec2bn(&keys.clnt->e, RSA_E_S); // RSA_E_S contains the public exponent Problem code (in Network::server_handshake): // *Received an encrypted message from the network and decrypt into 'buffer' (1024 byte long)* cout << "Assigning clients RSA" << endl; // I have verified that 'buffer' contains the proper key if (BN_hex2bn(&keys.clnt->n, buffer) < 0) { Error("ERROR reading server RSA"); } cout << "clients RSA has been assigned" << endl; The program segfaults at BN_hex2bn(&keys.clnt->n, buffer) with the error (valgrind output) Invalid read of size 8 at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8) by 0x40F23E: Network::server_handshake() (Network.cpp:177) by 0x40EF42: Network::startNet() (Network.cpp:126) by 0x403C38: main (server.cpp:51) Address 0x20 is not stack'd, malloc'd or (recently) free'd Process terminating with default action of signal 11 (SIGSEGV) Access not within mapped region at address 0x20 at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8) And I don't know why it is, I'm using the exact same code in the client program, and it works just fine. Any input is greatly appreciated!
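
    A guess at the crash, with a defensive sketch: "Address 0x20" in the valgrind output is what you would see if keys.clnt were still NULL when &keys.clnt->n is formed (0x20 is roughly where n sits inside the RSA struct on a 64-bit 0.9.8 build), i.e. the server path never ran the RSA_new() initialisation shown above. A guarded version, assuming the 0.9.8-style direct access to rsa->n that the question already uses:

        #include <openssl/rsa.h>
        #include <openssl/bn.h>
        #include <cstdio>

        typedef struct KEYS {       // as declared in the question's header
            RSA  *serv;
            char *serv_pub;
            int   pub_size;
            RSA  *clnt;
        } KEYS;

        // Returns 0 on success, -1 on failure; 'hex' is the client's modulus in hex.
        int assignClientModulus(KEYS *keys, const char *hex) {
            if (keys->clnt == NULL) {                   // init path never ran?
                keys->clnt = RSA_new();
                if (keys->clnt == NULL) return -1;
            }
            if (BN_hex2bn(&keys->clnt->n, hex) == 0) {  // digits converted; 0 means error
                fprintf(stderr, "ERROR reading client RSA modulus\n");
                return -1;
            }
            return 0;
        }

    Note also that BN_hex2bn() returns the number of hex digits it converted, so the original "< 0" test can never fire even when the conversion fails.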

    Read the article

  • Optimizing C# code in MVC controller

    - by cc0
    I am making a number of distinct controllers, one relating to each stored procedure in a database. These are only used to read data and make it available in JSON format for JavaScript. My code so far looks like this, and I'm wondering if I have missed any opportunities to re-use code, maybe make some helper classes. I have way too little experience doing OOP, so any help and suggestions here would be really appreciated. Here is my generalized code so far (tested and works): using System; using System.Configuration; using System.Web.Mvc; using System.Data; using System.Text; using System.Data.SqlClient; using Prototype.Models; namespace Prototype.Controllers { public class NameOfStoredProcedureController : Controller { char[] lastComma = { ',' }; String oldChar = "\""; String newChar = "&quot;"; StringBuilder json = new StringBuilder(); private String strCon = ConfigurationManager.ConnectionStrings["SomeConnectionString"].ConnectionString; private SqlConnection con; public StoredProcedureController() { con = new SqlConnection(strCon); } public string do_NameOfStoredProcedure(int parameter) { con.Open(); using (SqlCommand cmd = new SqlCommand("NameOfStoredProcedure", con)) { cmd.CommandType = CommandType.StoredProcedure; cmd.Parameters.AddWithValue("@parameter", parameter); using (SqlDataReader reader = cmd.ExecuteReader()) { while (reader.Read()) { json.AppendFormat("[{0},\"{1}\"],", reader["column1"], reader["column2"]); } } con.Close(); } if (json.Length.ToString().Equals("0")) { return "[]"; } else { return "[" + json.ToString().TrimEnd(lastComma) + "]"; } } //http://host.com/NameOfStoredProcedure?parameter=value public ActionResult Index(int parameter) { return new ContentResult { ContentType = "application/json", Content = do_NameOfStoredProcedure(parameter) }; } } }

    Read the article

  • Volatile fields in C#

    - by Danny Chen
    From the specification 10.5.3 Volatile fields: The type of a volatile field must be one of the following: A reference-type. The type byte, sbyte, short, ushort, int, uint, char, float, bool, System.IntPtr, or System.UIntPtr. An enum-type having an enum base type of byte, sbyte, short, ushort, int, or uint. First I want to confirm my understanding is correct: I guess the above types can be volatile because they are stored as a 4-byte unit in memory (for reference types, because of their address), which guarantees the read/write operation is atomic. A double/long/etc type can't be volatile because reads/writes of it are not atomic, since it is more than 4 bytes in memory. Is my understanding correct? And second, if the first guess is correct, why can't a user-defined struct with only one int field in it (or something similar, 4 bytes is OK) be volatile? Theoretically it's atomic, right? Or is it not allowed simply because all user-defined structs (which are possibly more than 4 bytes) are not allowed to be volatile by design?

    Read the article

  • CSS filters - sometimes working, sometimes not?

    - by a2h
    I'm on the verge of pulling my hair out over this. Here I have a block of perfectly functioning CSS: #admin .block.mode.off { opacity: 0.25; -ms-filter: "progid:DXImageTransform.Microsoft.Alpha(opacity=25)"; filter: progid:DXImageTransform.Microsoft.Alpha(opacity=25); } Meanwhile... Internet Explorer 8 couldn't care less about my filter declarations here: #admin .drop .tabs { margin-bottom: 12px; } #admin .drop .tab { margin-right: 4px; } #admin .drop .tab.off { cursor: pointer; opacity: 0.5; -ms-filter: "progid:DXImageTransform.Microsoft.Alpha(opacity=50)"; filter: progid:DXImageTransform.Microsoft.Alpha(opacity=50); } #admin .drop .tab.off:hover { text-shadow: 0px 0px 4px #fff; } #admin .drop .tab.on { cursor: default; text-shadow: 0px 0px 4px #fff; -ms-filter: "progid:DXImageTransform.Microsoft.Glow(color=#fff, strength=4)"; filter: progid:DXImageTransform.Microsoft.Glow(color=#fff, strength=4); } My document shows in IE8 Standards, and I am assuming the developer tools are a load of tuna, because the functioning block shows up in its CSS tab as: filter: progid:DXImageTransform.Microsoft.Alpha(opacity=25); opacity: 0.25 Does anyone have any ideas?

    Read the article

  • Inode to device information

    - by Methos
    I have 3 questions: I want to figure out if a file belongs to a USB device given the file inode. By looking in the latest kernel sources (2.6.33) on LXR, I think one can find that information by following pointers as follows: inode->super_block->block_device->backing_dev_info->device->device_driver (or device_type). However, the kernel that I am working with - 2.6.22.14 - does not have a struct device pointer in the backing_dev_info object. So how can I figure out which device a file belongs to from just the inode? I see that each of the inode, super_block and block_device contain an object of type 'dev_t'. But even after searching a lot, I could not find out how to convert 'dev_t' into struct device *. Is there any way to get that information? I tried to print device major and minor numbers using imajor(inode) and iminor(inode). However, for every file - belonging to hdd or usb - it always prints the major and minor numbers as zero. Why would that be happening? I searched online for USB major numbers and I found out that the major number for USB is 180. However, on multiple machines, it showed me the major number associated with the USB dev as 253. $ ls -ltr /dev/usb* crw-rw---- 1 root root 253, 4 2010-04-13 17:20 /dev/usbmon4 crw-rw---- 1 root root 253, 3 2010-04-13 17:20 /dev/usbmon3 crw-rw---- 1 root root 253, 8 2010-04-13 17:20 /dev/usbmon8 crw-rw---- 1 root root 253, 5 2010-04-13 17:20 /dev/usbmon5 crw-rw---- 1 root root 253, 1 2010-04-13 17:20 /dev/usbmon1 crw-rw---- 1 root root 253, 7 2010-04-13 17:20 /dev/usbmon7 Why is that so?
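
    A user-space sketch of the distinction that is probably behind the zeros: imajor()/iminor() read inode->i_rdev, which is only populated for device special files, while the device a regular file lives on is named by the superblock (inode->i_sb->s_dev with the kernel's MAJOR()/MINOR() macros; st_dev versus st_rdev in user space). For illustration only:

        #include <sys/stat.h>
        #include <sys/sysmacros.h>   // glibc home of major()/minor()
        #include <cstdio>

        int main(int argc, char **argv) {
            const char *path = argc > 1 ? argv[1] : "/etc/hostname";
            struct stat st;
            if (stat(path, &st) != 0) return 1;
            // device holding the file (analogous to inode->i_sb->s_dev in the kernel)
            printf("containing device: %u:%u\n", major(st.st_dev), minor(st.st_dev));
            // only non-zero for device special files (analogous to inode->i_rdev)
            printf("rdev: %u:%u\n", major(st.st_rdev), minor(st.st_rdev));
            return 0;
        }

    As for 180 versus 253: the /dev/usbmon* entries are USB monitoring character devices with a dynamically assigned major (253 on that machine), while a USB storage stick shows up as a SCSI disk (major 8), so matching on major 180 will not identify files stored on a USB drive either.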

    Read the article

  • Getting size of a specific byte array from an array of pointers to bytes

    - by Pat James
    In the following example C code, used in an Arduino project, I am looking for the ability to get the size of a specific byte array within an array of pointers to bytes, for example void setup() { Serial.begin(9600); // for debugging byte zero[] = {8, 169, 8, 128, 2,171,145,155,141,177,187,187,2,152,2,8,134,199}; byte one[] = {8, 179, 138, 138, 177 ,2,146, 8, 134, 8, 194,2,1,14,199,7, 145, 8,131, 8,158,8,187,187,191}; byte two[] = {29,7,1,8, 169, 8, 128, 2,171,145,155,141,177,187,187,2,152,2,8,134,199, 2, 2, 8, 179, 138, 138, 177 ,2,146, 8, 134, 8, 194,2,1,14,199,7, 145, 8,131, 8,158,8,187,187,191}; byte* numbers[3] = {zero, one, two }; function(numbers[1], sizeof(numbers[1])/sizeof(byte)); //doesn't work as desired, always passes 2 as the length function(numbers[1], 25); //this works } void loop() { } void function( byte arr[], int len ) { Serial.print("length: "); Serial.println(len); for (int i=0; i<len; i++){ Serial.print("array element "); Serial.print(i); Serial.print(" has value "); Serial.println((int)arr[i]); } } In this code, I understand that sizeof(numbers[1])/sizeof(byte) doesn't work because numbers[1] is a pointer and not the byte array value. Is there a way in this example that I can, at runtime, get at the length of a specific (runtime-determined) byte array within an array of pointers to bytes? Understand that I am limited to developing in C (or assembly) for an Arduino environment. I'm also open to other suggestions rather than the array of pointers to bytes. The overall objective is to organize lists of bytes which can be retrieved, with length, at runtime.
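
    One common workaround, sketched for the Arduino environment: capture each array's length at the definitions, where sizeof still sees a real array, and keep those lengths in a table parallel to the pointer table (the shortened data and the printArray name are just for illustration):

        #include <Arduino.h>

        byte zero[] = {8, 169, 8, 128, 2, 171, 145, 155, 141};
        byte one[]  = {8, 179, 138, 138, 177, 2, 146, 8, 134, 8, 194};

        byte *numbers[] = { zero, one };
        // sizeof(zero) works here because 'zero' is still an array, not a pointer
        const size_t lengths[] = { sizeof(zero) / sizeof(byte),
                                   sizeof(one)  / sizeof(byte) };

        void printArray(byte arr[], size_t len) {
            Serial.print("length: ");
            Serial.println(len);
            for (size_t i = 0; i < len; i++) {
                Serial.println((int)arr[i]);
            }
        }

        void setup() {
            Serial.begin(9600);
            printArray(numbers[1], lengths[1]);   // the length travels with the pointer
        }

        void loop() {}

    An equivalent alternative is a small struct holding the pointer and its length side by side, so the two can never drift apart.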

    Read the article

  • Need advice on comparing the performance of 2 equivalent linq to sql queries

    - by uvita
    I am working on a tool to optimize LINQ to SQL queries. Basically it intercepts the LINQ execution pipeline and makes some optimizations, like for example removing a redundant join from a query. Of course, there is an overhead in the execution time before the query gets executed in the DBMS, but then the query should be processed faster. I don't want to use a SQL profiler because I know that the generated query will perform better in the DBMS than the original one; I am looking for a correct way of measuring the global time between the creation of the query in LINQ and the end of its execution. Currently, I am using the Stopwatch class and my code looks something like this: var sw = new Stopwatch(); sw.Start(); const int amount = 100; for (var i = 0; i < amount; i++) { ExecuteNonOptimizedQuery(); } sw.Stop(); Console.WriteLine("Executing the query {2} times took: {0}ms. On average, each query took: {1}ms", sw.ElapsedMilliseconds, sw.ElapsedMilliseconds / amount, amount); Basically the ExecuteNonOptimizedQuery() method creates a new DataContext, creates a query and then iterates over the results. I did this for both versions of the query, the normal one and the optimized one. I took the idea from this post from Frans Bouma. Are there any other approaches/considerations I should take? Thanks in advance!

    Read the article

  • check whether mmap'ed address is correct

    - by reddot
    I'm writing a high-load daemon that should run on FreeBSD 8.0 and on Linux as well. The main purpose of the daemon is to pass files that are requested by their identifier. The identifier is converted into a local filename/file size via a request to a db. And then I use sequential mmap() calls to pass file blocks with send(). However, sometimes there is a mismatch between the file size in the db and the file size on the filesystem (real size < size in db). In this situation I've sent all the real data blocks, and when the next data block is mapped -- mmap returns no errors, just the usual address (I've checked the errno variable also, it's equal to zero after mmap). And when the daemon tries to send this block it gets a segmentation fault. (This behaviour is reliably reproduced on FreeBSD 8.0 amd64.) I was using a safety check before open to verify the size with a stat() call. However, real life shows me that the segfault can still be raised in rare situations. So, my question is: is there a way to check whether a pointer is accessible before dereferencing it? When I opened the core in gdb, it says that the given address is out of bounds. Probably there is another solution somebody can propose.
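
    A minimal sketch of one defence: re-check the real size with fstat() on the already-open descriptor just before each mmap() and clamp the block length to it, since touching mapped pages beyond end-of-file raises a signal (SIGBUS or SIGSEGV depending on the platform) rather than making mmap() itself fail:

        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <cstddef>

        // Map one block of the file, never past its current end.
        // 'offset' must be page-aligned; returns NULL when nothing can be mapped.
        void *map_block(int fd, off_t offset, size_t want, size_t *got) {
            struct stat st;
            if (fstat(fd, &st) != 0) return NULL;      // size as the kernel sees it now
            if (offset >= st.st_size) return NULL;     // db promised more than exists
            size_t avail = (size_t)(st.st_size - offset);
            *got = want < avail ? want : avail;        // clamp to what is really there
            void *p = mmap(NULL, *got, PROT_READ, MAP_SHARED, fd, offset);
            return p == MAP_FAILED ? NULL : p;
        }

    There is no portable way to ask whether an arbitrary pointer is dereferenceable; checking the size at the source like this, or installing a SIGBUS/SIGSEGV handler around the send path, are the usual options.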

    Read the article
