Search Results

Search found 1466 results on 59 pages for 'sizeof'.

Page 6 of 59

  • iOS Development: Why isn't my try block catching the exception?

    - by BeachRunnerJoe
    Hi. I have an exception being thrown in my code, but when I wrap it in a try/catch block, it doesn't get caught. The NSLog statement is never called in the catch block. Here's the code... NSInteger quoteLength; [data getBytes:(void *)&quoteLength range:NSMakeRange(sizeof(messageType), sizeof(quoteLength))]; @try { //Debugger stack trace shows an NSException being thrown on this statement NSData *stringData = [data subdataWithRange:NSMakeRange(sizeof(messageType) + sizeof(quoteLength), [data length])]; NSString *quote = [[NSString alloc] initWithData:stringData encoding:NSUTF8StringEncoding]; NSLog(@"received quote: %@",quote); [quote release]; } @catch (NSException * e) { NSLog(@"Exception Raised: %@", [e reason]); } Why doesn't it get caught? Thanks so much!

  • Mallocing an unsigned char array to store ints

    - by Max Desmond
    I keep getting a segmentation fault when i test the following code. I am currently unable to find an answer after having searched the web. a = (byte *)malloc(sizeof(byte) * x ) ; for( i = 0 ; i < x-1 ; i++ ) { scanf("%d", &y ) ; a[i] = y ; } Both y and x are initialized. X is the size of the array determined by the user. The segmentation fault is on the second to last integer to be added, i found this by adding printf("roar") ; before setting a[i] to y and entering one number at a time. Byte is a typedef of an unsigned char. Note: I've also tried using a[i] = (byte)y ; A is ininitalized as follows byte *a ; If you need to view the entire code it is this: #include <stdio.h> #include <stdlib.h> #include "sort.h" int p_cmp_f () ; int main( int argc, char *argv[] ) { int x, y, i, choice ; byte *a ; while( choice !=2 ) { printf( "Would you like to sort integers?\n1. Yes\n2. No\n" ) ; scanf("%d", &choice ) ; switch(choice) { case 1: printf( "Enter the length of the array: " ) ; scanf( "%d", &x ) ; a = (byte *)malloc(sizeof( byte ) * x ) ; printf( "Enter %d integers to add to the array: ", x ) ; for( i = 0 ; i < x -1 ; i++ ) { scanf( "%d", &y ) ; a[i] = y ; } switch( choice ) { case 1: bubble_sort( a, x, sizeof(int), p_cmp_f ) ; for( i = 0 ; i < x ; i++ ) printf( "%d", a[i] ; break ; case 2: selection_sort( a, x, sizeof(int), p_cmp_f ) ; for( i = 0 ; i < x; i++ ) printf( "%d", a[i] ; break ; case 3: insertion_sort( a, x, sizeof(int), p_cmp_f ) ; for( i = 0 ; i < x ; i++ ) printf( "%d", a[i] ; break ; case 4: merge_sort( a, x, sizeof(int), p_cmp_f ) ; for( i = 0 ; i < x ; i++ ) printf( "%d", a[i] ; break ; case 5: quick_sort( a, x, sizeof(int), p_cmp_f ) ; for( i = 0 ; i < x ; i++ ) printf( "%d", a[i] ; break ; default: printf("Enter either 1,2,3,4, or 5" ) ; break ; } case 2: printf( "Thank you for using this program\n" ) ; return 0 ; break ; default: printf( "Enter either 1 or 2: " ) ; break ; } } free(a) ; return 0 ; } int p_cmp_f( byte *element1, byte *element2 ) { return *((int *)element1) - *((int *)element2) ; }
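    For reference, a minimal standalone sketch (it does not use the poster's sort.h) of the usual fix: if int values are being stored and later sorted with an element size of sizeof(int), the buffer has to be allocated as x ints rather than x bytes, otherwise the int-sized writes and the sort run past the single-byte slots:

        #include <stdio.h>
        #include <stdlib.h>

        int main()
        {
            int x = 0;
            printf("Enter the length of the array: ");
            if (scanf("%d", &x) != 1 || x <= 0)
                return 1;

            int *a = (int *)malloc(sizeof(*a) * x);   /* room for x ints, not x bytes */
            if (a == NULL)
                return 1;

            for (int i = 0; i < x; i++)               /* i < x, not i < x - 1 */
                scanf("%d", &a[i]);

            for (int i = 0; i < x; i++)
                printf("%d ", a[i]);
            printf("\n");

            free(a);
            return 0;
        }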

  • structure in .NET

    - by redpaladin
    My problem is simple: I need to send a structure between a C program and a C# program. I made a struct in C#: public struct NetPoint { public float lat; // 4 bytes public float lon; // 4 bytes public int alt; // 4 bytes public long time; // 8 bytes } The total size of the struct should be 20 bytes. When I take the size of this struct with System.Diagnostics.Debug.WriteLine("SizeOf(NetPoint)=" + System.Runtime.InteropServices.Marshal.SizeOf(new NetPoint())); the debug console shows: SizeOf(NetPoint)=24 But I expected 20 bytes. Why is there a difference? Thank you in advance
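    The difference is alignment padding: the 8-byte long has to start on an 8-byte boundary, so 4 bytes of padding are inserted after the two floats and the int, and the total comes to 24. A small C++ analogue of the same layout (hypothetical struct names); on the C# side a Pack setting in StructLayoutAttribute plays the role that #pragma pack plays below:

        #include <stdio.h>

        struct NetPointPadded {
            float     lat;    // offset 0
            float     lon;    // offset 4
            int       alt;    // offset 8
            long long time;   // needs 8-byte alignment -> 4 bytes of padding, offset 16
        };

        #pragma pack(push, 1)
        struct NetPointPacked {   // same fields, no padding
            float     lat;
            float     lon;
            int       alt;
            long long time;
        };
        #pragma pack(pop)

        int main()
        {
            printf("padded: %zu\n", sizeof(NetPointPadded));  // typically 24
            printf("packed: %zu\n", sizeof(NetPointPacked));  // 20
            return 0;
        }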

  • Different sizes of same strings - Telnet programming

    - by tommyogp
    Hi all, I have been trying to create an iPhone app that sends telnet commands. However, what is puzzling me is that the sizes of certain strings come out so different, particularly when they include \n or \r. I have listed a few examples below. Please assist. const char *a = "play 25\n"; int sizeBitA1 = sizeof(a); // 8 units int sizeBitA2 = sizeof("play 25\n"); // 9 units const char *b = "\r\n"; int sizeBitB1 = sizeof(b); // 8 units int sizeBitB2 = sizeof("\r\n"); // 3 units
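    The expressions measure different things: sizeof on a pointer gives the size of the pointer itself (4 or 8 bytes depending on the platform), sizeof on a string literal gives the size of the character array including the terminating '\0', and strlen is what counts the characters. A short sketch of the same cases:

        #include <stdio.h>
        #include <string.h>

        int main()
        {
            const char *a = "play 25\n";

            printf("%zu\n", sizeof(a));            /* size of a pointer: 4 or 8 */
            printf("%zu\n", sizeof("play 25\n"));  /* array size: 8 chars + '\0' = 9 */
            printf("%zu\n", strlen(a));            /* characters before the '\0': 8 */
            printf("%zu\n", sizeof("\r\n"));       /* 2 chars + '\0' = 3 */
            return 0;
        }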

  • Help with enumerating registry values in C++

    - by vBx
    DWORD type = REG_NONE; int i = 0; size = sizeof(ValueName); size2 = sizeof(ValueData); BOOL bContinue = TRUE; do { lRet = RegEnumValue(Hkey , i , ValueName , &size , 0 , &type , ValueData , &size2); switch(lRet) { case ERROR_SUCCESS: print_values(ValueName , type , ValueData , size2); i++; size = sizeof(ValueName); size2 = sizeof(ValueData); break; case ERROR_MORE_DATA: size2 = sizeof(ValueData); if(NULL != ValueData) delete [] ValueData; ValueData = new BYTE[size2]; break; case ERROR_NO_MORE_ITEMS: bContinue = false; break; default: cout << "Unexpected error: " << GetLastError() << endl; bContinue = false; break; } }while(bContinue); it always goes to ERROR_NO_MORE_DATA ,why is that ? :-/

  • The View-Matrix and Alternative Calculations

    - by P. Avery
    I'm working on a radiosity processor in DirectX 9. The process requires that the camera be placed at the center of a mesh face and a 'screenshot' be taken facing 5 different directions...forward...up...down...left...right... ...The problem is that when the mesh face is facing up( look vector: 0, 1, 0 )...a view matrix cannot be determined using standard trigonometry functions: Matrix4 LookAt( Vector3 eye, Vector3 target, Vector3 up ) { // The "look-at" vector. Vector3 zaxis = normal(target - eye); // The "right" vector. Vector3 xaxis = normal(cross(up, zaxis)); // The "up" vector. Vector3 yaxis = cross(zaxis, xaxis); // Create a 4x4 orientation matrix from the right, up, and at vectors Matrix4 orientation = { xaxis.x, yaxis.x, zaxis.x, 0, xaxis.y, yaxis.y, zaxis.y, 0, xaxis.z, yaxis.z, zaxis.z, 0, 0, 0, 0, 1 }; // Create a 4x4 translation matrix by negating the eye position. Matrix4 translation = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, -eye.x, -eye.y, -eye.z, 1 }; // Combine the orientation and translation to compute the view matrix return ( translation * orientation ); } The above function comes from http://3dgep.com/?p=1700... ...Is there a mathematical approach to this problem? Edit: A problem occurs when setting the view matrix to up or down directions, here is an example of the problem when facing down: D3DXVECTOR4 vPos( 3, 3, 3, 1 ), vEye( 1.5, 3, 3, 1 ), vLook( 0, -1, 0, 1 ), vRight( 1, 0, 0, 1 ), vUp( 0, 0, 1, 1 ); D3DXMATRIX mV, mP; D3DXMatrixPerspectiveFovLH( &mP, D3DX_PI / 2, 1, 0.5f, 2000.0f ); D3DXMatrixIdentity( &mV ); memcpy( ( void* )&mV._11, ( void* )&vRight, sizeof( D3DXVECTOR3 ) ); memcpy( ( void* )&mV._21, ( void* )&vUp, sizeof( D3DXVECTOR3 ) ); memcpy( ( void* )&mV._31, ( void* )&vLook, sizeof( D3DXVECTOR3 ) ); memcpy( ( void* )&mV._41, ( void* )&(-vEye), sizeof( D3DXVECTOR3 ) ); D3DXVec4Transform( &vPos, &vPos, &( mV * mP ) ); Results: vPos = D3DXVECTOR3( 1.5, -6, -0.5, 0 ) - this vertex is not properly processed by shader as the homogenous w value is 0 it cannot be normalized to a position within device space...
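    One common workaround is to detect the degenerate case and substitute a different up vector for the straight-up and straight-down faces, since cross(up, zaxis) collapses to zero length when the view direction is parallel to up. A hedged sketch reusing the question's Vector3/Matrix4/normal/LookAt helpers and assuming a dot() helper exists:

        #include <cmath>

        // Vector3, Matrix4, normal(), LookAt() are taken from the question; dot() is assumed to exist.
        Matrix4 LookAtSafe(Vector3 eye, Vector3 target, Vector3 up)
        {
            Vector3 dir = normal(target - eye);

            // If dir and up are (almost) parallel, the right vector degenerates,
            // so fall back to another axis for the up/down captures.
            if (std::fabs(dot(dir, up)) > 0.999f)
                up = Vector3(0, 0, 1);   // any axis not parallel to dir works

            return LookAt(eye, target, up);
        }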

  • Per-vertex position/normal and per-index texture coordinate

    - by Boreal
    In my game, I have a mesh with a vertex buffer and index buffer up and running. The vertex buffer stores a Vector3 for the position and a Vector2 for the UV coordinate for each vertex. The index buffer is a list of ushorts. It works well, but I want to be able to use 3 discrete texture coordinates per triangle. I assume I have to create another vertex buffer, but how do I even use it? Here is my vertex/index buffer creation code: // vertices is a Vertex[] // indices is a ushort[] // VertexDefs stores the vertex size (sizeof(float) * 5) // vertex data numVertices = vertices.Length; DataStream data = new DataStream(VertexDefs.size * numVertices, true, true); data.WriteRange<Vertex>(vertices); data.Position = 0; // vertex buffer parameters BufferDescription vbDesc = new BufferDescription() { BindFlags = BindFlags.VertexBuffer, CpuAccessFlags = CpuAccessFlags.None, OptionFlags = ResourceOptionFlags.None, SizeInBytes = VertexDefs.size * numVertices, StructureByteStride = VertexDefs.size, Usage = ResourceUsage.Default }; // create vertex buffer vertexBuffer = new Buffer(Graphics.device, data, vbDesc); vertexBufferBinding = new VertexBufferBinding(vertexBuffer, VertexDefs.size, 0); data.Dispose(); // index data numIndices = indices.Length; data = new DataStream(sizeof(ushort) * numIndices, true, true); data.WriteRange<ushort>(indices); data.Position = 0; // index buffer parameters BufferDescription ibDesc = new BufferDescription() { BindFlags = BindFlags.IndexBuffer, CpuAccessFlags = CpuAccessFlags.None, OptionFlags = ResourceOptionFlags.None, SizeInBytes = sizeof(ushort) * numIndices, StructureByteStride = sizeof(ushort), Usage = ResourceUsage.Default }; // create index buffer indexBuffer = new Buffer(Graphics.device, data, ibDesc); data.Dispose(); Engine.Log(MessageType.Success, string.Format("Mesh created with {0} vertices and {1} indices", numVertices, numIndices)); And my drawing code: // ShaderEffect, ShaderTechnique, and ShaderPass all store effect data // e is of type ShaderEffect // get the technique ShaderTechnique t; if(!e.techniques.TryGetValue(techniqueName, out t)) return; // effect variables e.SetMatrix("worldView", worldView); e.SetMatrix("projection", projection); e.SetResource("diffuseMap", texture); e.SetSampler("textureSampler", sampler); // set per-mesh/technique settings Graphics.context.InputAssembler.SetVertexBuffers(0, vertexBufferBinding); Graphics.context.InputAssembler.SetIndexBuffer(indexBuffer, SlimDX.DXGI.Format.R16_UInt, 0); Graphics.context.PixelShader.SetSampler(sampler, 0); // render for each pass foreach(ShaderPass p in t.passes) { Graphics.context.InputAssembler.InputLayout = p.layout; p.pass.Apply(Graphics.context); Graphics.context.DrawIndexed(numIndices, 0, 0); } How can I do this?
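    The underlying constraint is that a GPU index addresses every vertex attribute at once, so per-corner texture coordinates are handled by expanding each (position index, uv index) pair into its own vertex rather than by adding a second buffer with its own index. A hedged sketch with hypothetical types (not SlimDX classes):

        #include <cstddef>
        #include <vector>

        struct Float3 { float x, y, z; };
        struct Float2 { float u, v; };
        struct Vertex { Float3 pos; Float2 uv; };

        // Expand separate position/uv index lists (3 entries per triangle each)
        // into one vertex buffer plus one index buffer the GPU can use.
        void Expand(const std::vector<Float3>& positions,
                    const std::vector<Float2>& uvs,
                    const std::vector<unsigned short>& posIndices,
                    const std::vector<unsigned short>& uvIndices,
                    std::vector<Vertex>& outVertices,
                    std::vector<unsigned short>& outIndices)
        {
            for (std::size_t i = 0; i < posIndices.size(); ++i)
            {
                Vertex v;
                v.pos = positions[posIndices[i]];
                v.uv  = uvs[uvIndices[i]];
                outIndices.push_back((unsigned short)outVertices.size());
                outVertices.push_back(v);
            }
        }

    Identical (position, uv) pairs can then be merged with a map from pair to vertex index to keep the buffer small.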

  • Why doesn't my texture display with this GLSL shader?

    - by Chewy Gumball
    I am trying to display a DXT1 compressed texture on a quad using a VBO and shaders, but I have been unable to get it working. All I get is a black square. I know my texture is uploaded properly because when I use immediate mode without shaders the texture displays fine but I will include that part just in case. Also, when I change the gl_FragColor to something like vec4 (0.0, 1.0, 1.0, 1.0) then I get a nice blue quad so I know that my shader is able to set the colour. It appears to be either the texture is not being bound correctly in the shader or the texture coordinates are not being picked up. However, I can't find the error! What am I doing wrong? I am using OpenTK in C# (not xna). Vertex Shader: void main() { gl_TexCoord[0] = gl_MultiTexCoord0; // Set the position of the current vertex gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; } Fragment Shader: uniform sampler2D diffuseTexture; void main() { // Set the output color of our current pixel gl_FragColor = texture2D(diffuseTexture, gl_TexCoord[0].st); //gl_FragColor = vec4 (0.0,1.0,1.0,1.0); } Drawing Code: int vb, eb; GL.GenBuffers(1, out vb); GL.GenBuffers(1, out eb); // Position Texture float[] verts = { 0.1f, 0.1f, 0.0f, 0.0f, 0.0f, 1.9f, 0.1f, 0.0f, 1.0f, 0.0f, 1.9f, 1.9f, 0.0f, 1.0f, 1.0f, 0.1f, 1.9f, 0.0f, 0.0f, 1.0f }; uint[] indices = { 0, 1, 2, 0, 2, 3 }; //upload data to the VBO GL.BindBuffer(BufferTarget.ArrayBuffer, vb); GL.BindBuffer(BufferTarget.ElementArrayBuffer, eb); GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(verts.Length * sizeof(float)), verts, BufferUsageHint.StaticDraw); GL.BufferData(BufferTarget.ElementArrayBuffer, (IntPtr)(indices.Length * sizeof(uint)), indices, BufferUsageHint.StaticDraw); //Upload texture int buffer = GL.GenTexture(); GL.BindTexture(TextureTarget.Texture2D, buffer); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (float)TextureWrapMode.Repeat); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (float)TextureWrapMode.Repeat); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (float)TextureMagFilter.Linear); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (float)TextureMinFilter.Linear); GL.TexEnv(TextureEnvTarget.TextureEnv, TextureEnvParameter.TextureEnvMode, (float)TextureEnvMode.Modulate); GL.CompressedTexImage2D(TextureTarget.Texture2D, 0, texture.format, texture.width, texture.height, 0, texture.data.Length, texture.data); //Draw GL.UseProgram(shaderProgram); GL.EnableClientState(ArrayCap.VertexArray); GL.EnableClientState(ArrayCap.TextureCoordArray); GL.VertexPointer(3, VertexPointerType.Float, 5 * sizeof(float), 0); GL.TexCoordPointer(2, TexCoordPointerType.Float, 5 * sizeof(float), 3); GL.ActiveTexture(TextureUnit.Texture0); GL.Uniform1(GL.GetUniformLocation(shaderProgram, "diffuseTexture"), 0); GL.DrawElements(BeginMode.Triangles, indices.Length, DrawElementsType.UnsignedInt, 0);
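    One detail that commonly produces an all-black quad with this kind of setup: once a VBO is bound, the final "pointer" argument of glVertexPointer/glTexCoordPointer is a byte offset into the buffer, so the texture coordinates of an interleaved x,y,z,u,v layout start at 3 * sizeof(float), not at 3. A hedged C-style sketch (the OpenTK overloads take the same offset, just passed as an int or IntPtr); GLEW stands in here for whatever loader supplies glBindBuffer:

        #include <GL/glew.h>

        // The offsets below are byte offsets into the bound buffer object.
        static void BindInterleavedXYZUV(GLuint vbo)
        {
            const GLsizei stride = 5 * sizeof(GLfloat);

            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glVertexPointer(3, GL_FLOAT, stride, (const GLvoid*)0);
            glTexCoordPointer(2, GL_FLOAT, stride, (const GLvoid*)(3 * sizeof(GLfloat)));
        }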

  • Can I get a C++ compiler to instantiate objects at compile time?

    - by gam3
    I am writing some code that has a very large number of reasonably simple objects and I would like them to be created at compile time. I would think that a compiler would be able to do this, but I have not been able to figure out how. In C I could do the following: #include <stdio.h> typedef struct data_s { int a; int b; char *c; } info; info list[] = { 1, 2, "a", 3, 4, "b", }; main() { int i; for (i = 0; i < sizeof(list)/sizeof(*list); i++) { printf("%d %s\n", i, list[i].c); } } Using C++, each object has its constructor called rather than just being laid out in memory. #include <iostream> using std::cout; using std::endl; class Info { const int a; const int b; const char *c; public: Info(const int, const int, const char *); const int get_a() { return a; }; const int get_b() { return b; }; const char *get_c() const { return c; }; }; Info::Info(const int a, const int b, const char *c) : a(a), b(b), c(c) {}; Info list[] = { Info(1, 2, "a"), Info(3, 4, "b"), }; main() { for (int i = 0; i < sizeof(list)/sizeof(*list); i++) { cout << i << " " << list[i].get_c() << endl; } } I just don't see what information is not available for the compiler to completely instantiate these objects at compile time, so I assume I am missing something.
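    Since C++11 this is what constexpr constructors are for: with a literal type and constant arguments, the array can be initialized at compile time and placed in static storage instead of being built by runtime constructor calls. A minimal sketch along the lines of the question's class:

        #include <cstddef>
        #include <iostream>

        class Info {
            const int a;
            const int b;
            const char *c;
        public:
            constexpr Info(int a_, int b_, const char *c_) : a(a_), b(b_), c(c_) {}
            constexpr int get_a() const { return a; }
            constexpr int get_b() const { return b; }
            constexpr const char *get_c() const { return c; }
        };

        // Laid out by the compiler at compile time, like the C array of structs.
        constexpr Info list[] = {
            Info(1, 2, "a"),
            Info(3, 4, "b"),
        };

        int main()
        {
            for (std::size_t i = 0; i < sizeof(list) / sizeof(*list); ++i)
                std::cout << i << " " << list[i].get_c() << std::endl;
        }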

  • Help debugging C FIFO code - stack smashing detected - open call not functioning - removing pipes

    - by nunos
    I have three bugs/questions regarding the source code pasted below: stack smashing deteced: In order to compile and not have that error I have addedd the gcc compile flag -fno-stack-protector. However, this should be just a temporary solution, since I would like to find where the cause for this is and correct it. However, I haven't been able to do so. Any clues? For some reason, the last open function call doesn't work and the programs just stops there, without an error, even though the fifo already exists. I want to delete the pipes from the filesystem after before terminating the processes. I have added close and unlink statements at the end, but the fifos are not removed. What am I doing wrong? Thanks very much in advance. P.S.: I am pasting here the whole source file for additional clarity. Just ignore the comments, since they are in my own native language. server.c: #include <stdio.h> #include <stdlib.h> #include <string.h> #include <unistd.h> #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> #include <errno.h> #define MAX_INPUT_LENGTH 100 #define FIFO_NAME_MAX_LEN 20 #define FIFO_DIR "/tmp/" #define FIFO_NAME_CMD_CLI_TO_SRV "lrc_cmd_cli_to_srv" typedef enum { false, true } bool; bool background = false; char* logfile = NULL; void read_from_fifo(int fd, char** var) { int n_bytes; read(fd, &n_bytes, sizeof(int)); *var = (char *) malloc (n_bytes); read(fd, *var, n_bytes); printf("read %d bytes '%s'\n", n_bytes, *var); } void write_to_fifo(int fd, char* data) { int n_bytes = (strlen(data)+1) * sizeof(char); write(fd, &n_bytes, sizeof(int)); //primeiro envia o numero de bytes que a proxima instrucao write ira enviar write(fd, data, n_bytes); printf("writing %d bytes '%s'\n", n_bytes, data); } int main(int argc, char* argv[]) { //CRIA FIFO CMD_CLI_TO_SRV, se ainda nao existir char* fifo_name_cmd_cli_to_srv; fifo_name_cmd_cli_to_srv = (char*) malloc ( (strlen(FIFO_NAME_CMD_CLI_TO_SRV) + strlen(FIFO_DIR) + 1) * sizeof(char) ); strcpy(fifo_name_cmd_cli_to_srv, FIFO_DIR); strcat(fifo_name_cmd_cli_to_srv, FIFO_NAME_CMD_CLI_TO_SRV); int n = mkfifo(fifo_name_cmd_cli_to_srv, 0660); //TODO ver permissoes if (n < 0 && errno != EEXIST) //se houver erro, e nao for por causa de ja haver um com o mesmo nome, termina o programa { fprintf(stderr, "erro ao criar o fifo\n"); fprintf(stderr, "errno: %d\n", errno); exit(4); } //se por acaso já existir, nao cria o fifo e continua o programa normalmente //le informacao enviada pelo cliente, nesta ordem: //1. pid (em formato char*) do processo cliente //2. comando /CONNECT //3. nome de fifo INFO_SRV_TO_CLIXXX //4. 
nome de fifo MSG_SRV_TO_CLIXXX char* command; char* fifo_name_info_srv_to_cli; char* fifo_name_msg_srv_to_cli; char* client_pid_string; int client_pid; int fd_cmd_cli_to_srv, fd_info_srv_to_cli; fd_cmd_cli_to_srv = open(fifo_name_cmd_cli_to_srv, O_RDONLY); read_from_fifo(fd_cmd_cli_to_srv, &client_pid_string); client_pid = atoi(client_pid_string); read_from_fifo(fd_cmd_cli_to_srv, &command); //recebe commando /CONNECT read_from_fifo(fd_cmd_cli_to_srv, &fifo_name_info_srv_to_cli); //recebe nome de fifo INFO_SRV_TO_CLIXXX read_from_fifo(fd_cmd_cli_to_srv, &fifo_name_msg_srv_to_cli); //recebe nome de fifo MSG_TO_SRV_TO_CLIXXX //CIRA FIFO MSG_CLIXXX_TO_SRV char fifo_name_msg_cli_to_srv[FIFO_NAME_MAX_LEN]; strcpy(fifo_name_msg_cli_to_srv, FIFO_DIR); strcat(fifo_name_msg_cli_to_srv, "lrc_msg_cli"); strcat(fifo_name_msg_cli_to_srv, client_pid_string); strcat(fifo_name_msg_cli_to_srv, "_to_srv"); n = mkfifo(fifo_name_msg_cli_to_srv, 0660); if (n < 0) { fprintf(stderr, "error creating %s\n", fifo_name_msg_cli_to_srv); fprintf(stderr, "errno: %d\n", errno); exit(5); } //envia ao cliente a resposta ao commando /CONNECT fd_info_srv_to_cli = open(fifo_name_info_srv_to_cli, O_WRONLY); write_to_fifo(fd_info_srv_to_cli, fifo_name_msg_cli_to_srv); free(logfile); free(fifo_name_cmd_cli_to_srv); close(fd_cmd_cli_to_srv); unlink(fifo_name_cmd_cli_to_srv); unlink(fifo_name_msg_cli_to_srv); unlink(fifo_name_msg_srv_to_cli); unlink(fifo_name_info_srv_to_cli); printf("fim\n"); return 0; } client.c: #include <stdio.h> #include <stdlib.h> #include <string.h> #include <unistd.h> #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> #include <errno.h> #define MAX_INPUT_LENGTH 100 #define PID_BUFFER_LEN 10 #define FIFO_NAME_CMD_CLI_TO_SRV "lrc_cmd_cli_to_srv" #define FIFO_NAME_INFO_SRV_TO_CLI "lrc_info_srv_to_cli" #define FIFO_NAME_MSG_SRV_TO_CLI "lrc_msg_srv_to_cli" #define COMMAND_MAX_LEN 100 #define FIFO_DIR "/tmp/" typedef enum { false, true } bool; char* nickname; char* name; char* email; void write_to_fifo(int fd, char* data) { int n_bytes = (strlen(data)+1) * sizeof(char); write(fd, &n_bytes, sizeof(int)); //primeiro envia o numero de bytes que a proxima instrucao write ira enviar write(fd, data, n_bytes); printf("writing %d bytes '%s'\n", n_bytes, data); } void read_from_fifo(int fd, char** var) { int n_bytes; read(fd, &n_bytes, sizeof(int)); *var = (char *) malloc (n_bytes); printf("read '%s'\n", *var); read(fd, *var, n_bytes); } int main(int argc, char* argv[]) { pid_t pid = getpid(); //CRIA FIFO INFO_SRV_TO_CLIXXX char pid_string[PID_BUFFER_LEN]; sprintf(pid_string, "%d", pid); char* fifo_name_info_srv_to_cli; fifo_name_info_srv_to_cli = (char *) malloc ( (strlen(FIFO_DIR) + strlen(FIFO_NAME_INFO_SRV_TO_CLI) + strlen(pid_string) + 1 ) * sizeof(char) ); strcpy(fifo_name_info_srv_to_cli, FIFO_DIR); strcat(fifo_name_info_srv_to_cli, FIFO_NAME_INFO_SRV_TO_CLI); strcat(fifo_name_info_srv_to_cli, pid_string); int n = mkfifo(fifo_name_info_srv_to_cli, 0660); if (n < 0) { fprintf(stderr, "error creating %s\n", fifo_name_info_srv_to_cli); fprintf(stderr, "errno: %d\n", errno); exit(6); } int fd_cmd_cli_to_srv, fd_info_srv_to_cli; fd_cmd_cli_to_srv = open("/tmp/lrc_cmd_cli_to_srv", O_WRONLY); char command[COMMAND_MAX_LEN]; printf("> "); scanf("%s", command); while (strcmp(command, "/CONNECT")) { printf("O primeiro comando deverá ser \"/CONNECT\"\n"); printf("> "); scanf("%s", command); } //CRIA FIFO MSG_SRV_TO_CLIXXX char* fifo_name_msg_srv_to_cli; fifo_name_msg_srv_to_cli = (char *) malloc ( 
(strlen(FIFO_DIR) + strlen(FIFO_NAME_MSG_SRV_TO_CLI) + strlen(pid_string) + 1) * sizeof(char) ); strcpy(fifo_name_msg_srv_to_cli, FIFO_DIR); strcat(fifo_name_msg_srv_to_cli, FIFO_NAME_MSG_SRV_TO_CLI); strcat(fifo_name_msg_srv_to_cli, pid_string); n = mkfifo(fifo_name_msg_srv_to_cli, 0660); if (n < 0) { fprintf(stderr, "error creating %s\n", fifo_name_info_srv_to_cli); fprintf(stderr, "errno: %d\n", errno); exit(7); } // ENVIA COMANDO /CONNECT write_to_fifo(fd_cmd_cli_to_srv, pid_string); //envia pid do processo cliente write_to_fifo(fd_cmd_cli_to_srv, command); //envia commando /CONNECT write_to_fifo(fd_cmd_cli_to_srv, fifo_name_info_srv_to_cli); //envia nome de fifo INFO_SRV_TO_CLIXXX write_to_fifo(fd_cmd_cli_to_srv, fifo_name_msg_srv_to_cli); //envia nome de fifo MSG_TO_SRV_TO_CLIXXX // recebe do servidor a resposta ao comanddo /CONNECT printf("msg1\n"); printf("vamos tentar abrir %s\n", fifo_name_info_srv_to_cli); fd_info_srv_to_cli = open(fifo_name_info_srv_to_cli, O_RDONLY); printf("%s aberto", fifo_name_info_srv_to_cli); if (fd_info_srv_to_cli < 0) { fprintf(stderr, "erro ao criar %s\n", fifo_name_info_srv_to_cli); fprintf(stderr, "errno: %d\n", errno); } printf("msg2\n"); char* fifo_name_msg_cli_to_srv; printf("msg3\n"); read_from_fifo(fd_info_srv_to_cli, &fifo_name_msg_cli_to_srv); printf("msg4\n"); free(nickname); free(name); free(email); free(fifo_name_info_srv_to_cli); free(fifo_name_msg_srv_to_cli); unlink(fifo_name_msg_srv_to_cli); unlink(fifo_name_info_srv_to_cli); printf("fim\n"); return 0; } makefile: CC = gcc CFLAGS = -Wall -lpthread -fno-stack-protector all: client server client: client.c $(CC) $(CFLAGS) client.c -o client server: server.c $(CC) $(CFLAGS) server.c -o server clean: rm -f client server *~
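    One likely cause of the stack smashing, sketched below: fifo_name_msg_cli_to_srv is an automatic array of FIFO_NAME_MAX_LEN (20) bytes, but "/tmp/" + "lrc_msg_cli" + the PID digits + "_to_srv" plus the terminator needs well over 20 bytes, so the strcat calls write past the end of the buffer. Building the name with snprintf into a buffer sized for the whole path avoids that (a minimal sketch with a made-up PID string):

        #include <stdio.h>
        #include <string.h>

        #define FIFO_DIR "/tmp/"

        int main()
        {
            const char *pid_string = "12345";   /* stands in for the client PID */
            char fifo_name[64];                 /* large enough for the whole path */

            int n = snprintf(fifo_name, sizeof(fifo_name),
                             "%slrc_msg_cli%s_to_srv", FIFO_DIR, pid_string);
            if (n < 0 || (size_t)n >= sizeof(fifo_name))
                return 1;                       /* name would have been truncated */

            printf("%s (%zu bytes incl. '\\0')\n", fifo_name, strlen(fifo_name) + 1);
            return 0;
        }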

  • Access violation writing location, in my loop

    - by numerical25
    The exact error I am getting is First-chance exception at 0x0096234a in chp2.exe: 0xC0000005: Access violation writing location 0x002b0000. Windows has triggered a breakpoint in chp2.exe. And the breakpoint stops here for(DWORD i = 0; i < m; ++i) { //we are start at the top of z float z = halfDepth - i*dx; for(DWORD j = 0; j < n; ++j) { //to the left of us float x = -halfWidth + j*dx; float y = 0.0f; vertices[i*n+j].pos = D3DXVECTOR3(x, y, z); //<----- Right here vertices[i*n+j].color = D3DXVECTOR4(1.0f, 0.0f, 0.0f, 0.0f); } } I am not sure what I am doing wrong. below is the code in its entirety #include "MyGame.h" //#include "CubeVector.h" /* This code sets a projection and shows a turning cube. What has been added is the project, rotation and a rasterizer to change the rasterization of the cube. The issue that was going on was something with the effect file which was causing the vertices not to be rendered correctly.*/ typedef struct { ID3D10Effect* pEffect; ID3D10EffectTechnique* pTechnique; //vertex information ID3D10Buffer* pVertexBuffer; ID3D10Buffer* pIndicesBuffer; ID3D10InputLayout* pVertexLayout; UINT numVertices; UINT numIndices; }ModelObject; ModelObject modelObject; // World Matrix D3DXMATRIX WorldMatrix; // View Matrix D3DXMATRIX ViewMatrix; // Projection Matrix D3DXMATRIX ProjectionMatrix; ID3D10EffectMatrixVariable* pProjectionMatrixVariable = NULL; //grid information #define NUM_COLS 16 #define NUM_ROWS 16 #define CELL_WIDTH 32 #define CELL_HEIGHT 32 #define NUM_VERTSX (NUM_COLS + 1) #define NUM_VERTSY (NUM_ROWS + 1) bool MyGame::InitDirect3D() { if(!DX3dApp::InitDirect3D()) { return false; } D3D10_RASTERIZER_DESC rastDesc; rastDesc.FillMode = D3D10_FILL_WIREFRAME; rastDesc.CullMode = D3D10_CULL_FRONT; rastDesc.FrontCounterClockwise = true; rastDesc.DepthBias = false; rastDesc.DepthBiasClamp = 0; rastDesc.SlopeScaledDepthBias = 0; rastDesc.DepthClipEnable = false; rastDesc.ScissorEnable = false; rastDesc.MultisampleEnable = false; rastDesc.AntialiasedLineEnable = false; ID3D10RasterizerState *g_pRasterizerState; mpD3DDevice->CreateRasterizerState(&rastDesc, &g_pRasterizerState); mpD3DDevice->RSSetState(g_pRasterizerState); // Set up the World Matrix //The first line of code creates your identity matrix. 
Second line of code //second combines your camera position, target location, and which way is up respectively D3DXMatrixIdentity(&WorldMatrix); D3DXMatrixLookAtLH(&ViewMatrix, new D3DXVECTOR3(200.0f, 60.0f, -20.0f), new D3DXVECTOR3(200.0f, 50.0f, 0.0f), new D3DXVECTOR3(0.0f, 1.0f, 0.0f)); // Set up the projection matrix D3DXMatrixPerspectiveFovLH(&ProjectionMatrix, (float)D3DX_PI * 0.5f, (float)mWidth/(float)mHeight, 0.1f, 100.0f); if(!CreateObject()) { return false; } return true; } //These are actions that take place after the clearing of the buffer and before the present void MyGame::GameDraw() { static float rotationAngle = 0.0f; // create the rotation matrix using the rotation angle D3DXMatrixRotationY(&WorldMatrix, rotationAngle); rotationAngle += (float)D3DX_PI * 0.0f; // Set the input layout mpD3DDevice->IASetInputLayout(modelObject.pVertexLayout); // Set vertex buffer UINT stride = sizeof(VertexPos); UINT offset = 0; mpD3DDevice->IASetVertexBuffers(0, 1, &modelObject.pVertexBuffer, &stride, &offset); mpD3DDevice->IASetIndexBuffer(modelObject.pIndicesBuffer, DXGI_FORMAT_R32_UINT, 0); // Set primitive topology mpD3DDevice->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST); // Combine and send the final matrix to the shader D3DXMATRIX finalMatrix = (WorldMatrix * ViewMatrix * ProjectionMatrix); pProjectionMatrixVariable->SetMatrix((float*)&finalMatrix); // make sure modelObject is valid // Render a model object D3D10_TECHNIQUE_DESC techniqueDescription; modelObject.pTechnique->GetDesc(&techniqueDescription); // Loop through the technique passes for(UINT p=0; p < techniqueDescription.Passes; ++p) { modelObject.pTechnique->GetPassByIndex(p)->Apply(0); // draw the cube using all 36 vertices and 12 triangles mpD3DDevice->DrawIndexed(modelObject.numIndices,0,0); } } //Render actually incapsulates Gamedraw, so you can call data before you actually clear the buffer or after you //present data void MyGame::Render() { DX3dApp::Render(); } bool MyGame::CreateObject() { //dx will represent the width and the height of the spacing of each vector float dx = 1; //Below are the number of vertices //m is the vertices of each row. 
n is the columns DWORD m = 30; DWORD n = 30; //This get the width of the entire land //30 - 1 = 29 rows * 1 = 29 * 0.5 = 14.5 float halfWidth = (n-1)*dx*0.5f; float halfDepth = (m-1)*dx*0.5f; float vertexsize = m * n; VertexPos vertices[80]; for(DWORD i = 0; i < m; ++i) { //we are start at the top of z float z = halfDepth - i*dx; for(DWORD j = 0; j < n; ++j) { //to the left of us float x = -halfWidth + j*dx; float y = 0.0f; vertices[i*n+j].pos = D3DXVECTOR3(x, y, z); vertices[i*n+j].color = D3DXVECTOR4(1.0f, 0.0f, 0.0f, 0.0f); } } int k = 0; DWORD indices[540]; for(DWORD i = 0; i < n-1; ++i) { for(DWORD j = 0; j < n-1; ++j) { indices[k] = (i * n) + j; indices[k + 1] = (i * n) + j + 1; indices[k + 2] = (i + 1) * n + j; indices[k + 3] = (i + 1) * n + j; indices[k + 4] = (i * n) + j + 1; indices[k + 5] = (i + 1) * n + j+ 1; k += 6; } } //Create Layout D3D10_INPUT_ELEMENT_DESC layout[] = { {"POSITION",0,DXGI_FORMAT_R32G32B32_FLOAT, 0 , 0, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"COLOR",0,DXGI_FORMAT_R32G32B32A32_FLOAT, 0 , 12, D3D10_INPUT_PER_VERTEX_DATA, 0} }; UINT numElements = (sizeof(layout)/sizeof(layout[0])); modelObject.numVertices = sizeof(vertices)/sizeof(VertexPos); //Create buffer desc D3D10_BUFFER_DESC bufferDesc; bufferDesc.Usage = D3D10_USAGE_DEFAULT; bufferDesc.ByteWidth = sizeof(VertexPos) * modelObject.numVertices; bufferDesc.BindFlags = D3D10_BIND_VERTEX_BUFFER; bufferDesc.CPUAccessFlags = 0; bufferDesc.MiscFlags = 0; D3D10_SUBRESOURCE_DATA initData; initData.pSysMem = vertices; //Create the buffer HRESULT hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &modelObject.pVertexBuffer); if(FAILED(hr)) return false; modelObject.numIndices = sizeof(indices)/sizeof(DWORD); bufferDesc.ByteWidth = sizeof(DWORD) * modelObject.numIndices; bufferDesc.BindFlags = D3D10_BIND_INDEX_BUFFER; initData.pSysMem = indices; hr = mpD3DDevice->CreateBuffer(&bufferDesc, &initData, &modelObject.pIndicesBuffer); if(FAILED(hr)) return false; ///////////////////////////////////////////////////////////////////////////// //Set up fx files LPCWSTR effectFilename = L"effect.fx"; modelObject.pEffect = NULL; hr = D3DX10CreateEffectFromFile(effectFilename, NULL, NULL, "fx_4_0", D3D10_SHADER_ENABLE_STRICTNESS, 0, mpD3DDevice, NULL, NULL, &modelObject.pEffect, NULL, NULL); if(FAILED(hr)) return false; pProjectionMatrixVariable = modelObject.pEffect->GetVariableByName("Projection")->AsMatrix(); //Dont sweat the technique. Get it! LPCSTR effectTechniqueName = "Render"; modelObject.pTechnique = modelObject.pEffect->GetTechniqueByName(effectTechniqueName); if(modelObject.pTechnique == NULL) return false; //Create Vertex layout D3D10_PASS_DESC passDesc; modelObject.pTechnique->GetPassByIndex(0)->GetDesc(&passDesc); hr = mpD3DDevice->CreateInputLayout(layout, numElements, passDesc.pIAInputSignature, passDesc.IAInputSignatureSize, &modelObject.pVertexLayout); if(FAILED(hr)) return false; return true; }
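    The loops write m*n = 900 vertices (and the index loop needs (n-1)*(n-1)*6 = 5046 entries) into arrays declared as vertices[80] and indices[540], so both buffers are overrun, which matches the access violation on the vertices[i*n+j] line. A sketch of sizing the containers from m and n instead; VertexPos, DWORD and the D3DX types are assumed from the question's code:

        #include <vector>

        // Sized from m and n, so the fill loops below can never run past the end.
        std::vector<VertexPos> vertices(m * n);
        std::vector<DWORD>     indices((m - 1) * (n - 1) * 6);

        for (DWORD i = 0; i < m; ++i)
        {
            float z = halfDepth - i * dx;
            for (DWORD j = 0; j < n; ++j)
            {
                float x = -halfWidth + j * dx;
                vertices[i * n + j].pos   = D3DXVECTOR3(x, 0.0f, z);
                vertices[i * n + j].color = D3DXVECTOR4(1.0f, 0.0f, 0.0f, 0.0f);
            }
        }
        // Later, use vertices.size(), indices.size() and the .data() pointers
        // when filling D3D10_BUFFER_DESC / D3D10_SUBRESOURCE_DATA.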

  • texture mapping with lib3ds and SOIL help

    - by Adam West
    I'm having trouble with my project for loading a texture map onto a model. Any insight into what is going wrong with my code is fantastic. Right now the code only renders a teapot which I have assinged after creating it in 3DS Max. 3dsloader.cpp #include "3dsloader.h" Object::Object(std:: string filename) { m_TotalFaces = 0; m_model = lib3ds_file_load(filename.c_str()); // If loading the model failed, we throw an exception if(!m_model) { throw strcat("Unable to load ", filename.c_str()); } // set properties of texture coordinate generation for both x and y coordinates glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR); glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR); // if not already enabled, enable texture generation if(! glIsEnabled(GL_TEXTURE_GEN_S)) glEnable(GL_TEXTURE_GEN_S); if(! glIsEnabled(GL_TEXTURE_GEN_T)) glEnable(GL_TEXTURE_GEN_T); } Object::~Object() { if(m_model) // if the file isn't freed yet lib3ds_file_free(m_model); //free up memory glDisable(GL_TEXTURE_GEN_S); glDisable(GL_TEXTURE_GEN_T); } void Object::GetFaces() { m_TotalFaces = 0; Lib3dsMesh * mesh; // Loop through every mesh. for(mesh = m_model->meshes;mesh != NULL;mesh = mesh->next) { // Add the number of faces this mesh has to the total number of faces. m_TotalFaces += mesh->faces; } } void Object::CreateVBO() { assert(m_model != NULL); // Calculate the number of faces we have in total GetFaces(); // Allocate memory for our vertices and normals Lib3dsVector * vertices = new Lib3dsVector[m_TotalFaces * 3]; Lib3dsVector * normals = new Lib3dsVector[m_TotalFaces * 3]; Lib3dsTexel* texCoords = new Lib3dsTexel[m_TotalFaces * 3]; Lib3dsMesh * mesh; unsigned int FinishedFaces = 0; // Loop through all the meshes for(mesh = m_model->meshes;mesh != NULL;mesh = mesh->next) { lib3ds_mesh_calculate_normals(mesh, &normals[FinishedFaces*3]); // Loop through every face for(unsigned int cur_face = 0; cur_face < mesh->faces;cur_face++) { Lib3dsFace * face = &mesh->faceL[cur_face]; for(unsigned int i = 0;i < 3;i++) { memcpy(&texCoords[FinishedFaces*3 + i], mesh->texelL[face->points[ i ]], sizeof(Lib3dsTexel)); memcpy(&vertices[FinishedFaces*3 + i], mesh->pointL[face->points[ i ]].pos, sizeof(Lib3dsVector)); } FinishedFaces++; } } // Generate a Vertex Buffer Object and store it with our vertices glGenBuffers(1, &m_VertexVBO); glBindBuffer(GL_ARRAY_BUFFER, m_VertexVBO); glBufferData(GL_ARRAY_BUFFER, sizeof(Lib3dsVector) * 3 * m_TotalFaces, vertices, GL_STATIC_DRAW); // Generate another Vertex Buffer Object and store the normals in it glGenBuffers(1, &m_NormalVBO); glBindBuffer(GL_ARRAY_BUFFER, m_NormalVBO); glBufferData(GL_ARRAY_BUFFER, sizeof(Lib3dsVector) * 3 * m_TotalFaces, normals, GL_STATIC_DRAW); // Generate a third VBO and store the texture coordinates in it. 
glGenBuffers(1, &m_TexCoordVBO); glBindBuffer(GL_ARRAY_BUFFER, m_TexCoordVBO); glBufferData(GL_ARRAY_BUFFER, sizeof(Lib3dsTexel) * 3 * m_TotalFaces, texCoords, GL_STATIC_DRAW); // Clean up our allocated memory delete vertices; delete normals; delete texCoords; // We no longer need lib3ds lib3ds_file_free(m_model); m_model = NULL; } void Object::applyTexture(const char*texfilename) { float imageWidth; float imageHeight; glGenTextures(1, & textureObject); // allocate memory for one texture textureObject = SOIL_load_OGL_texture(texfilename,SOIL_LOAD_AUTO,SOIL_CREATE_NEW_ID,SOIL_FLAG_MIPMAPS); glPixelStorei(GL_UNPACK_ALIGNMENT,1); glBindTexture(GL_TEXTURE_2D, textureObject); // use our newest texture glGetTexLevelParameterfv(GL_TEXTURE_2D,0,GL_TEXTURE_WIDTH,&imageWidth); glGetTexLevelParameterfv(GL_TEXTURE_2D,0,GL_TEXTURE_HEIGHT,&imageHeight); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // give the best result for texture magnification glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); //give the best result for texture minification glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); // don't repeat texture glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); // don't repeat textureglTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); // don't repeat texture glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE,GL_MODULATE); glTexImage2D(GL_TEXTURE_2D,0,GL_RGB,imageWidth,imageHeight,0,GL_RGB,GL_UNSIGNED_BYTE,& textureObject); } void Object::Draw() const { // Enable vertex, normal and texture-coordinate arrays. glEnableClientState(GL_VERTEX_ARRAY); glEnableClientState(GL_NORMAL_ARRAY); glEnableClientState(GL_TEXTURE_COORD_ARRAY); // Bind the VBO with the normals. glBindBuffer(GL_ARRAY_BUFFER, m_NormalVBO); // The pointer for the normals is NULL which means that OpenGL will use the currently bound VBO. glNormalPointer(GL_FLOAT, 0, NULL); glBindBuffer(GL_ARRAY_BUFFER, m_TexCoordVBO); glTexCoordPointer(2, GL_FLOAT, 0, NULL); glBindBuffer(GL_ARRAY_BUFFER, m_VertexVBO); glVertexPointer(3, GL_FLOAT, 0, NULL); // Render the triangles. glDrawArrays(GL_TRIANGLES, 0, m_TotalFaces * 3); glDisableClientState(GL_VERTEX_ARRAY); glDisableClientState(GL_NORMAL_ARRAY); glDisableClientState(GL_TEXTURE_COORD_ARRAY); } 3dsloader.h #include "main.h" #include "lib3ds/file.h" #include "lib3ds/mesh.h" #include "lib3ds/material.h" class Object { public: Object(std:: string filename); virtual ~Object(); virtual void Draw() const; virtual void CreateVBO(); void applyTexture(const char*texfilename); protected: void GetFaces(); unsigned int m_TotalFaces; Lib3dsFile * m_model; Lib3dsMesh* Mesh; GLuint textureObject; GLuint m_VertexVBO, m_NormalVBO, m_TexCoordVBO; }; Called in the main cpp file with: VBO,apply texture and draw (pretty simple, how ironic) and thats it, please help me forum :)

  • Strange problem with Random Access Filing in C++

    - by sam
    This is a simple random access filing program . The problem arises where i want to write data randomly. If I write any where in the file the previous records are set to 0. the last 1 which is being entered currently holds the correct value all others =0. This is the code #include <iostream> #include<fstream> #include<string> using namespace std; class name { int id; int pass; public: void writeBlank(); void writedata(); void readdata(); void readall(); int getid() { return id; } int getpass() { return pass; } void setid(int i) { id=i; } void setpass(int p) { pass=p; } }; void name::writeBlank() { name person; person.setid(0); person.setpass(0); int i; ofstream out("pass.txt",ios::binary); if ( !out ) { cout << "File could not be opened." << endl; } for(i=0;i<10;i++) //make 10 records { cout<<"Put pointer is at: "<<out.tellp()<<endl; cout<<"Blank record "<<i<<" is: "<<person.getid()<<" "<<person.getpass()<<" and size: "<<sizeof(person)<<endl; cout<<"Put pointer is at: "<<out.tellp()<<endl; out.write(reinterpret_cast< const char * >(&person),sizeof(name)); } } void name::writedata() { ofstream out("pass.txt",ios::binary|ios::out); name n1; int iD,p; cout<<"ID?"; cin>>iD; n1.setid(iD); cout<<"Enter password"; cin>>p; n1.setpass(p); if (!out ) { cout << "File could not be opened." << endl; } out.seekp((n1.getid()-1)*sizeof(name),ios::beg); //pointer moves to desired location where we have to store password according to its ID(index) cout<<"File pointer is at: "<<out.tellp()<<endl; out.write(reinterpret_cast<const char*> (&n1), sizeof(name)); //write on that pointed location } void name::readall() { name n1; ifstream in("pass.txt",ios::binary); if ( !in ) { cout << "File could not be opened." << endl; } in.read( reinterpret_cast<char *>(&n1), sizeof(name) ); while ( !in.eof() ) { // display record cout<<endl<<"password at this index is:"<<n1.getpass()<<endl; cout<<"File pointer is at: "<<in.tellg()<<endl; // read next from file in.read( reinterpret_cast< char * >(&n1), sizeof(name)); } // end while } void name::readdata() { ifstream in("pass.txt",ios::binary); if ( !in ) { cout << "File could not be opened." << endl; } in.seekg((getid()-1)*sizeof(name)); //pointer moves to desired location where we have to read password according to its ID(index) cout<<"File pointer is at: "<<in.tellg()<<endl; in.read((char* )this,sizeof(name)); //reads from that pointed location cout<<endl<<"password at this index is:"<<getpass()<<endl; } int main() { name n1; cout<<"Enter 0 to write blank records"<<endl; cout<<"Enter 1 for new account"<<endl; cout<<"Enter 2 to login"<<endl; cout<<"Enter 3 to read all"<<endl; cout<<"Enter 9 to exit"<<endl; int option; cin>>option; while(option==0 || option==1 || option==2 || option==3) { if (option == 0) n1.writeBlank(); if(option==1) { /*int iD,p; cout<<"ID?"; cin>>iD; n1.setid(iD); cout<<"Enter password"; cin>>p; n1.setpass(p);*/ n1.writedata(); } int ind; if(option==2) { cout<<"Index?"; cin>>ind; n1.setid(ind); n1.readdata(); } if(option == 3) n1.readall(); cout<<"Enter 0 to write blank records"<<endl; cout<<"Enter 1 for new account"<<endl; cout<<"Enter 2 to login"<<endl; cout<<"Enter 3 to read all"<<endl; cout<<"Enter 9 to exit"<<endl; cin>>option; } } I Cant understand Y the previous records turn 0.
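    The earlier records turn to zero because writedata() reopens the file with an ofstream, and an ofstream opened without ios::in truncates the file before the seekp, so every write starts from an empty, zero-filled file and only the record just written survives. Opening a single fstream for update keeps the existing contents; a minimal sketch with a stand-in Record struct:

        #include <fstream>
        #include <iostream>

        struct Record { int id; int pass; };

        int main()
        {
            // ios::in | ios::out updates the existing file in place;
            // a plain ofstream (ios::out) would truncate it first.
            std::fstream f("pass.txt", std::ios::in | std::ios::out | std::ios::binary);
            if (!f) { std::cout << "File could not be opened." << std::endl; return 1; }

            Record r = { 5, 1234 };
            f.seekp((r.id - 1) * sizeof(Record), std::ios::beg);
            f.write(reinterpret_cast<const char*>(&r), sizeof(Record));
            return 0;
        }

    Note that ios::in | ios::out does not create the file, so it has to exist first; the existing writeBlank() pass takes care of that.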

  • Strange things appear when running the program

    - by FILIaS
    Hey! I'm fixing a program but I'm facing a problem and I cant really realize what's the wrong on the code. I would appreciate any help. I didnt post all the code...but i think with this part you can get an idea of it. With the following function enter() I wanna add user commands' datas to a list. eg. user give the command: "enter james bond 007 gun" 'james' is supposed to be the name, 'bond' the surname, 007 the amount and the rest is the description. I use strtok in order to 'cut' the command,then i put each name on a temp array. Then i call InsertSort in order to put the datas on a linked list but in alphabetical order depending on the surname that users give. I wanna keep the list on order and put each time the elements on the right position. /* struct for all the datas that user enters on file*/ typedef struct catalog { char short_name[50]; char surname[50]; signed int amount; char description[1000]; struct catalog *next; }catalog,*catalogPointer; catalogPointer current; catalogPointer head = NULL; void enter(void)//user command: enter <name> <surname> <amount> <description> { int n,j=2,k=0; char temp[1500]; char command[1500]; while (command[j]!=' ' && command[j]!='\0') { temp[k]=command[j]; j++; k++; } temp[k]='\0'; char *curToken = strtok(temp," "); printf("temp is:%s \n",temp); char short_name[50],surname[50],description[1000]; signed int amount; //short_name=(char *)malloc(sizeof (char *)); //surname=(char *)malloc(sizeof (char *)); //description=(char *)malloc(sizeof (char *)); //amount=(int *)malloc(sizeof (int *)); printf("\nWhat you entered for saving:\n"); for (n = 0; curToken !='\0'; ++n) { if (curToken) { strncpy(short_name, curToken, sizeof (char *)); / } printf("Short Name: %s \n",short_name); curToken = strtok(NULL," "); if (curToken) strncpy(surname, curToken, sizeof (char *)); / printf("SurName: %s \n",surname); curToken = strtok(NULL," "); if (curToken) { char *chk; amount = (int) strtol(curToken, &chk, 10); if (!isspace(*chk) && *chk != 0) fprintf(stderr,"Warning: expected integer value for amount, received %s instead\n",curToken); } printf("Amount: %d \n",amount); curToken = strtok(NULL,"\0"); if (curToken) { strncpy(description, curToken, sizeof (char *)); } printf("Description: %s \n",description); break; } if (findEntryExists(head, surname) != NULL) printf("\nAn entry for <%s %s> is already in the catalog!\nNew entry not entered.\n",short_name,surname); else { printf("\nTry to entry <%s %s %d %s> in the catalog list!\n",short_name,surname,amount,description); InsertSort(&head,short_name, surname, amount, description); printf("\n**Entry done!**\n"); } // Maintain the list in alphabetical order by surname. } /********Uses special case code for the head end********/ void SortedInsert(catalog** headRef, catalogPointer newNode,char short_name[],char surname[],signed int amount,char description[]) { strcpy(newNode->short_name, short_name); strcpy(newNode->surname, surname); newNode->amount=amount; strcpy(newNode->description, description); // Special case for the head end if (*headRef == NULL||(*headRef)->surname >= newNode->surname) { newNode->next = *headRef; *headRef = newNode; } else { // Locate the node before the point of insertion catalogPointer current = *headRef; catalogPointer temp=current->next; while ( temp!=NULL ) { if(strcmp(temp->surname,newNode->surname)<0 ) current = temp; } newNode->next = temp; temp = newNode; } } // Given a list, change it to be in sorted order (using SortedInsert()). 
void InsertSort(catalog** headRef,char short_name[],char surname[],signed int amount,char description[]) { catalogPointer result = NULL; // build the answer here catalogPointer current = *headRef; // iterate over the original list catalogPointer next; while (current!=NULL) { next = current->next; // tricky - note the next pointer before we change it SortedInsert(&result,current,short_name,surname,amount,description); current = next; } *headRef = result; } Running the program I get these strange things (garbage?)... Choose your selection: enter james bond 007 gun Your command is: enter james bond 007 gun temp is:james What you entered for saving: Short Name: james SurName: Amount: 0 Description: 0T?? Try to entry james 0 0T?? in the catalog list! Entry done! Also I'm facing a problem on how to use the 'malloc' on this program. Thanks in advance. . .
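    One source of the garbage values: every strncpy in enter() is bounded by sizeof(char *), the size of a pointer (4 or 8 bytes), instead of the size of the destination array, so longer tokens are cut short and nothing guarantees a terminating '\0'. A minimal sketch of the safer pattern:

        #include <stdio.h>
        #include <string.h>

        int main()
        {
            char surname[50];
            const char *curToken = "bond";   /* stands in for strtok's result */

            /* bound by the destination size, not sizeof(char *), and always terminate */
            strncpy(surname, curToken, sizeof(surname) - 1);
            surname[sizeof(surname) - 1] = '\0';

            printf("SurName: %s\n", surname);
            return 0;
        }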

  • Parallax backgrounds in OpenGL ES on the iPhone

    - by Scott
    I've got basically a 2d game on the iPhone and I'm trying to set up multiple backgrounds that scroll at different speeds (known as parallax backgrounds). So my thought was to just stick the backgrounds BEHIND the foreground using different z-coordinate planes, and just make them bigger than the foreground (in size) to accommodate, so that the whole thing can be scrolled (just at a different speed). And (as far as I know) I basically implemented that. The only problem is that it seems to entirely ignore whatever z-value I give it, or rather it just zeroes all of them. I see the background (I've only tested ONE background so far, to keep it simple...so for now I just have a foreground and I want one background scrolling at a different speed), but it scrolls 1:1 with my foreground, so it obviously doesn't look right, and most of it is cut off (cause it's bigger). And I've tried various z-values for the background and various near/far clipping planes...it's always the same. I'm probably just doing one simple thing wrong, but I can't figure it out. I'm wondering if it has to do with me using only 2 coordinates in glVertexPointer for the foreground? (Of course for the background I AM passing in 3) I'll post some code: This is some initial setup: glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -10.0f, 10.0f); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glEnableClientState(GL_VERTEX_ARRAY); //glEnableClientState(GL_COLOR_ARRAY); glEnableClientState(GL_TEXTURE_COORD_ARRAY); //transparency glEnable (GL_BLEND); glBlendFunc (GL_ONE, GL_ONE_MINUS_SRC_ALPHA); A little bit about my foreground's float array....it's interleaved. For my foreground it goes vertex x, vertex y, texture x, texture y, repeat. This all works just fine. This is my FOREGROUND rendering: glVertexPointer(2, GL_FLOAT, 4*sizeof(GLfloat), texes); <br> glTexCoordPointer(2, GL_FLOAT, 4*sizeof(GLfloat), (GLvoid*)texes + 2*sizeof(GLfloat)); <br> glDrawArrays(GL_TRIANGLES, 0, indexCount / 4); BACKGROUND rendering: Same drill here except this time it goes vertex x, vertex y, vertex z, texture x, texture y, repeat. Note the z value this time. I did make sure the data in this array was correct while debugging (getting the right z values). And again, it shows up...it's just not going far back in the distance like it should. glVertexPointer(3, GL_FLOAT, 5*sizeof(GLfloat), b1Texes); glTexCoordPointer(2, GL_FLOAT, 5*sizeof(GLfloat), (GLvoid*)b1Texes + 3*sizeof(GLfloat)); glDrawArrays(GL_TRIANGLES, 0, b1IndexCount / 5); And to move my camera, I just do a simple glTranslatef(x, y, 0.0f); I'm not understanding what I'm doing wrong cause this seems like the most basic 3D function imaginable...things further away are smaller and don't move as fast when the camera moves. Not the case for me. Seems like it should be pretty basic and not even really be affected by my projection and all that (though I've even tried doing glFrustum just for fun, no success). Please help, I feel like it's just one dumb thing. I will post more code if necessary.
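    One relevant detail: glOrthof sets up an orthographic projection, and under an orthographic projection the z coordinate changes neither apparent size nor scroll rate, so placing layers at different depths cannot produce parallax on its own. Either switch to a perspective projection (glFrustumf) or, as is common in 2D games, translate each layer by a fraction of the camera offset. A hedged sketch of the manual approach (the actual draw call is left as a placeholder):

        #include <OpenGLES/ES1/gl.h>

        // factor = 1.0 scrolls with the foreground; smaller values scroll slower,
        // which is what makes a layer read as being further away.
        static void DrawParallaxLayer(float cameraX, float cameraY, float factor)
        {
            glPushMatrix();
            glTranslatef(-cameraX * factor, -cameraY * factor, 0.0f);
            // ... bind the layer's texture and issue its glDrawArrays call here ...
            glPopMatrix();
        }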

  • Uploading an image to Flickr in C++

    - by Alien01
    I am creating an application in VC++ using win32,wininet to upload an image to Flickr.I am able to get Frob,Token correctly but when I try to upload the image I am getting error Post size too large. Headers are created as follows wstring wstrAddHeaders = L"Content-Type: multipart/form-data;boundary=ABCD\r\n"; wstrAddHeaders += L"Host: api.flickr.com\r\n"; wchar_t tempStr[MAX_PATH]; wsprintf(L"Content-Length: %ld\r\n",szTotalSize); wstrAddHeaders += tmpStr; wstrAddHeaders +=L"\r\n"; HINTERNET hSession = InternetConnect(hInternet, L"www.flickr.com", INTERNET_DEFAULT_HTTP_PORT, NULL,NULL, INTERNET_SERVICE_HTTP, 0, 0); if(hSession==NULL) { dwErr = GetLastError(); return; } Content of Post request are created as follows: wstring wstrBoundry = L"--ABCD\r\n"; wstring wstrContent =wstrBoundry; wstrContent +=L"Content-Disposition: form-data; name=\"api_key\"\r\n\r\n"; wstrContent +=wstrAPIKey.c_str() ; wstrContent += L"\r\n"; wstrContent +=wstrBoundry; wstrContent +=L"Content-Disposition: form-data; name=\"auth_token\"\r\n\r\n"; wstrContent +=m_wstrToken.c_str(); wstrContent += L"\r\n"; wstrContent +=wstrBoundry; wstrContent +=L"Content-Disposition: form-data; name=\"api_sig\"\r\n\r\n"; wstrContent +=wstrSig; wstrContent += L"\r\n"; wstrContent +=wstrBoundry; wstrContent +=L"Content-Disposition: form-data; name=\"photo\"; filename=\"C:\\test.jpg\""; wstrContent +=L"\r\n"; wstrContent +=L"Content-Type: image/jpeg\r\n\r\n"; wstring wstrFilePath(L"C:\\test.jpg"); CAtlFile file; HRESULT hr = S_OK; hr = file.Create(wstrFilePath.c_str(),GENERIC_READ,FILE_SHARE_READ,OPEN_EXISTING); if(FAILED(hr)) { return; } ULONGLONG nLen; hr = file.GetSize(nLen); if (nLen > (DWORD)-1) { return ; } char * fileBuf = new char[nLen]; file.Read(fileBuf,nLen); wstring wstrLastLine(L"\r\n--ABCD--\r\n"); size_t szTotalSize = sizeof(wchar_t) * (wstrContent.length()) +sizeof(wchar_t) * (wstrLastLine.length()) + nLen; unsigned char *buffer = (unsigned char *)malloc(szTotalSize); memset(buffer,0,szTotalSize); memcpy(buffer,wstrContent.c_str(),wstrContent.length() * sizeof(wchar_t)); memcpy(buffer+wstrContent.length() * sizeof(wchar_t),fileBuf,nLen); memcpy(buffer+wstrContent.length() * sizeof(wchar_t)+nLen,wstrLastLine.c_str(),wstrLastLine.length() * sizeof(wchar_t)); hRequest = HttpOpenRequest(hSession, L"POST", L"/services/upload/", L"HTTP/1.1", NULL, NULL, 0, NULL); if(hRequest) { bRet = HttpAddRequestHeaders(hRequest,wstrAddHeaders.c_str(),wstrAddHeaders.length(),HTTP_ADDREQ_FLAG_ADD | HTTP_ADDREQ_FLAG_REPLACE); if(bRet) { bRet = HttpSendRequest(hRequest,NULL,0,(void *)buffer,szTotalSize); if(bRet) { while(true) { char buffer[1024]={0}; DWORD read=0; BOOL r = InternetReadFile(hRequest,buffer,1024,&read); if(read !=0) { wstring strUploadXML =buffer; break; } } } } I am not pretty sure the way I am adding image data to the string and posting the request. Do I need to convert image data into Unicode? Any suggestions , if someone can find what I am doing wrong that would be very helpful to me.
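    A likely source of the size problem: the multipart body is assembled from wchar_t strings, so every text character occupies two bytes, the boundary lines no longer match the plain-ASCII boundary declared in the Content-Type header, and Content-Length is computed from that inflated buffer. An HTTP body is a byte stream, so the text parts should be narrow (ASCII/UTF-8) bytes around the raw JPEG data. A hedged sketch of assembling such a body (illustrative names, not the original variables):

        #include <string>
        #include <vector>

        // Build the multipart body as raw bytes so the boundaries are plain ASCII
        // and Content-Length can simply be body.size().
        std::vector<char> BuildBody(const std::string& apiKey,
                                    const std::vector<char>& jpegBytes)
        {
            std::string head =
                "--ABCD\r\n"
                "Content-Disposition: form-data; name=\"api_key\"\r\n\r\n" + apiKey + "\r\n"
                "--ABCD\r\n"
                "Content-Disposition: form-data; name=\"photo\"; filename=\"test.jpg\"\r\n"
                "Content-Type: image/jpeg\r\n\r\n";
            std::string tail = "\r\n--ABCD--\r\n";

            std::vector<char> body(head.begin(), head.end());
            body.insert(body.end(), jpegBytes.begin(), jpegBytes.end());
            body.insert(body.end(), tail.begin(), tail.end());
            return body;   // pass body.data() and body.size() to HttpSendRequest
        }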

  • Basic C question, concerning memory allocation and value assignment

    - by VHristov
    Hi there, I have recently started working on my master thesis in C that I haven't used in quite a long time. Being used to Java, I'm now facing all kinds of problems all the time. I hope someone can help me with the following one, since I've been struggling with it for the past two days. So I have a really basic model of a database: tables, tuples, attributes and I'm trying to load some data into this structure. Following are the definitions: typedef struct attribute { int type; char * name; void * value; } attribute; typedef struct tuple { int tuple_id; int attribute_count; attribute * attributes; } tuple; typedef struct table { char * name; int row_count; tuple * tuples; } table; Data is coming from a file with inserts (generated for the Wisconsin benchmark), which I'm parsing. I have only integer or string values. A sample row would look like: insert into table values (9205, 541, 1, 1, 5, 5, 5, 5, 0, 1, 9205, 10, 11, 'HHHHHHH', 'HHHHHHH', 'HHHHHHH'); I've "managed" to load and parse the data and also to assign it. However, the assignment bit is buggy, since all values point to the same memory location, i.e. all rows look identical after I've loaded the data. Here is what I do: char value[10]; // assuming no value is longer than 10 chars int i, j, k; table * data = (table*) malloc(sizeof(data)); data->name = "table"; data->row_count = number_of_lines; data->tuples = (tuple*) malloc(number_of_lines*sizeof(tuple)); tuple* current_tuple; for(i=0; i<number_of_lines; i++) { current_tuple = &data->tuples[i]; current_tuple->tuple_id = i; current_tuple->attribute_count = 16; // static in our system current_tuple->attributes = (attribute*) malloc(16*sizeof(attribute)); for(k = 0; k < 16; k++) { current_tuple->attributes[k].name = attribute_names[k]; // for int values: current_tuple->attributes[k].type = DB_ATT_TYPE_INT; // write data into value-field int v = atoi(value); current_tuple->attributes[k].value = &v; // for string values: current_tuple->attributes[k].type = DB_ATT_TYPE_STRING; current_tuple->attributes[k].value = value; } // ... } While I am perfectly aware, why this is not working, I can't figure out how to get it working. I've tried following things, none of which worked: memcpy(current_tuple->attributes[k].value, &v, sizeof(int)); This results in a bad access error. Same for the following code (since I'm not quite sure which one would be the correct usage): memcpy(current_tuple->attributes[k].value, &v, 1); Not even sure if memcpy is what I need here... Also I've tried allocating memory, by doing something like: current_tuple->attributes[k].value = (int *) malloc(sizeof(int)); only to get "malloc: * error for object 0x100108e98: incorrect checksum for freed object - object was probably modified after being freed." As far as I understand this error, memory has already been allocated for this object, but I don't see where this happened. Doesn't the malloc(sizeof(attribute)) only allocate the memory needed to store an integer and two pointers (i.e. not the memory those pointers point to)? Any help would be greatly appreciated! Regards, Vassil
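    The rows all look identical because value = &v stores the address of the same loop-local int for every attribute, and value = value points every string at the same temporary buffer, so all rows alias one location. Each attribute needs its own allocation that the data is copied into; a minimal sketch (compiles as C or C++):

        #include <stdlib.h>
        #include <string.h>

        typedef struct attribute {
            int   type;
            char *name;
            void *value;
        } attribute;

        /* Give each attribute its own storage instead of pointing every row
           at the same local variable or temp buffer. */
        static void set_int_value(attribute *att, int v)
        {
            att->value = malloc(sizeof(int));
            memcpy(att->value, &v, sizeof(int));
        }

        static void set_string_value(attribute *att, const char *s)
        {
            size_t n = strlen(s) + 1;
            att->value = malloc(n);
            memcpy(att->value, s, n);
        }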

  • Drawing lines in 3D space

    - by DeadMG
    When attempting to draw a line in 3D space with D3DPT_LINELIST, then Direct3D gives me an error about an invalid vertex declaration, saying that it cannot be converted to an FVF. I am using the same vertex declaration and shader/stream setup as for my D3DPT_TRIANGLELIST rendering which works absolutely correctly. How can I use D3DPT_LINELIST to render some lines in 3D space? Edit: Oopsie, forgot my codeses. Here's my raw Draw call. D3DCALL(device->SetStreamSource(1, PerBoneBuffer.get(), 0, sizeof(PerInstanceData))); D3DCALL(device->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1)); D3DCALL(device->SetStreamSource(0, LineVerts, 0, sizeof(D3DXVECTOR3))); D3DCALL(device->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | lines.size())); D3DCALL(device->SetIndices(LineIndices)); PerInstanceData* data; std::vector<Wide::Render::Line*> lines_vec(lines.begin(), lines.end()); D3DCALL(PerBoneBuffer->Lock(0, lines.size() * sizeof(PerInstanceData), reinterpret_cast<void**>(&data), D3DLOCK_DISCARD)); std::for_each(lines.begin(), lines.end(), [&](Wide::Render::Line* ptr) { data->Color = D3DXColor(ptr->Colour); D3DXMATRIXA16 Translate, Scale, Rotate; D3DXMatrixTranslation(&Translate, ptr->Start.x, ptr->Start.y, ptr->Start.z); D3DXMatrixScaling(&Scale, ptr->Scale, 1, 1); D3DXMatrixRotationQuaternion(&Rotate, &D3DQuaternion(ptr->Rotation)); data->World = Scale * Rotate * Translate; }); D3DCALL(PerBoneBuffer->Unlock()); D3DCALL(device->DrawIndexedPrimitive(D3DPRIMITIVETYPE::D3DPT_LINELIST, 0, 0, 2, 0, 1)); Here's my vertex declaration: D3DVERTEXELEMENT9 BasicMeshVertices[] = { {0, 0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0}, {1, 0, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0}, {1, 16, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 1}, {1, 32, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 2}, {1, 48, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 3}, {1, 64, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_COLOR, 0}, D3DDECL_END() }; The LineIndices are just 0, 1 and the LineVerts are just {0,0,0} and {1,0,0}, to represent a unit 3D line along the X axis.

  • SRV from UAV on the same texture in DirectX

    - by notabene
    I'm programming GPGPU raymarching (volumetric raytracing) in DirectX 11. I successfully run the compute shader and save the raymarched volume data to a texture. Then I want to use the same texture as an SRV in the normal graphics pipeline. But it doesn't work; the texture is not visible. The texture itself is OK: when I save it to a file it is what I expect. Texture rendering is OK too: when I render another SRV, it shows up. So the problem is only in the UAV-SRV combination. I also triple-checked that the pointers are OK. Please help, I'm getting mad about this. Here is some code: //before dispatch D3D11_TEXTURE2D_DESC textureDesc; ZeroMemory( &textureDesc, sizeof( textureDesc ) ); textureDesc.Width = xr; textureDesc.Height = yr; textureDesc.MipLevels = 1; textureDesc.ArraySize = 1; textureDesc.SampleDesc.Count = 1; textureDesc.SampleDesc.Quality = 0; textureDesc.Usage = D3D11_USAGE_DEFAULT; textureDesc.BindFlags = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE ; textureDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT; D3D->CreateTexture2D( &textureDesc, NULL, &pTexture ); D3D11_UNORDERED_ACCESS_VIEW_DESC viewDescUAV; ZeroMemory( &viewDescUAV, sizeof( viewDescUAV ) ); viewDescUAV.Format = DXGI_FORMAT_R32G32B32A32_FLOAT; viewDescUAV.ViewDimension = D3D11_UAV_DIMENSION_TEXTURE2D; viewDescUAV.Texture2D.MipSlice = 0; D3DD->CreateUnorderedAccessView( pTexture, &viewDescUAV, &pTextureUAV ); //the getSRV function after dispatch. D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc ; ZeroMemory( &srvDesc, sizeof( srvDesc ) ); srvDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT; srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D; srvDesc.Texture2D.MipLevels = 1; D3DD->CreateShaderResourceView( pTexture, &srvDesc, &pTextureSRV);
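    One thing worth checking: a resource cannot be bound as a UAV and an SRV at the same time, and D3D11 silently forces the SRV slot to null (with only a debug-layer warning) while the texture is still bound on the compute stage's UAV slot, which looks exactly like an invisible texture. A hedged sketch of clearing the bindings around the draw; "context" stands for the immediate ID3D11DeviceContext, which the snippet above does not show:

        // After the compute pass, release the UAV slot so the texture can be read as an SRV.
        ID3D11UnorderedAccessView* nullUAV = nullptr;
        context->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);

        context->PSSetShaderResources(0, 1, &pTextureSRV);
        // ... draw ...

        // Before the next Dispatch, release the SRV slot again.
        ID3D11ShaderResourceView* nullSRV = nullptr;
        context->PSSetShaderResources(0, 1, &nullSRV);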

    Read the article

  • How do I keep a 3D model on the screen in OpenGL?

    - by NoobScratcher
    I'm trying to keep a 3D model on the screen by placing my glDrawElements calls inside the draw function, with the declarations at the top of the .cpp file. When I render the model, the model attaches itself to the current vertex buffer object. This is because my whole graphical user interface is in 2D quads, except the window frame. Is there a way to prevent this from happening, or are there any common causes of this? Creating the file object:
    int index = IndexAssigner(1, 1);
    // make a file object and store the list and the index of that list in a C string
    ifstream file (list[index].c_str() );
    // make another string
    //string line;
    points.push_back(Point());
    Point p;
    int face[4];
    Model rendering code:
    int numfloats = 4;
    float* point = reinterpret_cast<float*>(&points[0]);
    int num_bytes = numfloats * sizeof(float);
    cout << "Size Of Point" << sizeof(Point) << endl;
    GLuint vertexbuffer;
    glGenVertexArrays(1, &vao[3]);
    glGenBuffers(1, &vertexbuffer);
    glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
    glBufferData(GL_ARRAY_BUFFER, points.size()*sizeof(points), points.data(), GL_STATIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, num_bytes, &points[0]);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, points.size(), &points[0]);
    glEnableClientState(GL_INDEX_ARRAY);
    glIndexPointer(GL_FLOAT, faces.size(), faces.data());
    glEnableVertexAttribArray(0);
    glDrawElements(GL_QUADS, points.size(), GL_UNSIGNED_INT, points.data());
    glDrawElements(GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data());
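    As a hedged sketch (assuming Point is three floats and faces holds unsigned int indices, which is not confirmed in the post), a conventional VBO/IBO setup would size the buffer with sizeof(Point) rather than sizeof(points), keep the attribute and element pointers as byte offsets into the bound buffers instead of client-side addresses, and issue a single indexed draw:
    glBindVertexArray(vao[3]);
    glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
    glBufferData(GL_ARRAY_BUFFER, points.size() * sizeof(Point), points.data(), GL_STATIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Point), (void*)0);  // byte offset, not a pointer
    glEnableVertexAttribArray(0);
    GLuint indexbuffer;  // hypothetical index buffer object
    glGenBuffers(1, &indexbuffer);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexbuffer);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, faces.size() * sizeof(unsigned int), faces.data(), GL_STATIC_DRAW);
    glDrawElements(GL_QUADS, faces.size(), GL_UNSIGNED_INT, (void*)0);  // one indexed draw from the IBO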

    Read the article

  • CUDA 4.1 Update

    - by N0xus
    I'm currently working on porting a particle system to update on the GPU via CUDA. With CUDA, I've already passed the required data over to the GPU and allocated and copied the data via the host. When I build the project, it all compiles fine, but when I run it, the program says I need to allocate my h_position pointer. This pointer is my host pointer and is meant to hold the data. I know I need to pass the current particle positions to the cudaMemcpy call; they are currently stored in a list, with a for loop iterating over each particle and executing the following line of code:
    m_particleList[i].positionY = m_particleList[i].positionY - (m_particleList[i].velocity * frameTime * 0.001f);
    My current host-side CUDA code looks like this:
    float* h_position; // Your host pointer. This holds the data (I assume it's already filled with the data.)
    float* d_position; // Your device pointer, we will allocate and fill this
    float* d_velocity;
    float* d_time;
    int threads_per_block = 128; // You should play with this value
    int blocks = m_maxParticles/threads_per_block + ( (m_maxParticles%threads_per_block)?1:0 );
    const int N = 10;
    size_t size = N * sizeof(float);
    cudaMalloc( (void**)&d_position, m_maxParticles * sizeof(float) );
    cudaMemcpy( d_position, h_position, m_maxParticles * sizeof(float), cudaMemcpyHostToDevice);
    Both of these can be found inside my UpdateParticle() method. I had originally thought it would be a simple case of changing the h_position variable in the cudaMemcpy to m_particleList[i], but then I get the following error:
    no suitable conversion function from "ParticleSystemClass::ParticleType" to "const void *" exists
    I've probably messed up somewhere, but could someone please help fix the issues I'm facing? Everything else seems to run fine; it's just when I try to run the program that certain things hit the fan.
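    A hedged sketch of what the host side might look like if the particle positions are flattened into h_position before the copy; the member names are taken from the post, the kernel launch itself is omitted, and this is only one way to stage the data:
    float* h_position = new float[m_maxParticles];
    for (int i = 0; i < m_maxParticles; ++i)
        h_position[i] = m_particleList[i].positionY;  // flatten the member cudaMemcpy needs
    float* d_position = 0;
    cudaMalloc((void**)&d_position, m_maxParticles * sizeof(float));
    cudaMemcpy(d_position, h_position, m_maxParticles * sizeof(float), cudaMemcpyHostToDevice);
    // ... launch the update kernel here, then copy the results back ...
    cudaMemcpy(h_position, d_position, m_maxParticles * sizeof(float), cudaMemcpyDeviceToHost);
    for (int i = 0; i < m_maxParticles; ++i)
        m_particleList[i].positionY = h_position[i];  // scatter back into the particle list
    cudaFree(d_position);
    delete[] h_position;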

    Read the article

  • Free memory outside function [migrated]

    - by Dev Bag
    Can you please help with this issue: is the code below going to leak memory, or is it OK? Please also let me know if there is anything else I need to pay attention to.
    typedef struct {
        int len;
        UC * message;
    } pack;
    pack * prepare_packet_to_send(const int length, const unsigned char tag, const int numargs, ... )
    {
        pack *layer = malloc(sizeof(pack));
        va_list listp;
        va_start( listp, numargs );
        int step = 0;
        layer->message = (unsigned char *) malloc(length);
        layer->len = length;
        int i = 0;
        int len = 0;
        unsigned char *source_message;
        for( i = 0 ; i < numargs; i++ )
        {
            source_message = va_arg( listp, unsigned char *);
            len = va_arg( listp, long);
            memcpy(layer->message+step, source_message, (long) len);
            step += len;
        }
        va_end( listp );
        return layer;
    }
    main()
    {
        pack *test = prepare_packet_to_send(sizeof(var1)+sizeof(var2), any_tag, any_args); // pseudocode arguments
        // are the following two frees correct/enough, or is there something else I need to do to prevent a memory leak?
        free(test->message);
        free(test);
    }
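    For what it is worth, the two frees shown do pair up with the two mallocs in prepare_packet_to_send; a small helper (sketched below, the name free_packet is made up) keeps that pairing in one place and guards against a NULL pointer:
    void free_packet(pack *p)
    {
        if (p != NULL) {
            free(p->message);  /* releases the malloc(length) payload */
            free(p);           /* then the struct itself */
        }
    }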

    Read the article

  • How to update a user created Bitmap in the Windows API

    - by gamernb
    In my code I quickly generate images on the fly, and I want to display them as quickly as possible. So the first time I create my image, I create a new BITMAP, but instead of deleting the old one and creating a new one for every subsequent image, I just want to copy my data back into the existing one. Here is my code to do both the initial creation and the updating. The creation works just fine, but the updating doesn't work.
    BITMAPINFO bi;
    HBITMAP Frame::CreateBitmap(HWND hwnd, int tol1, int tol2, bool useWhite, bool useBackground)
    {
        ZeroMemory(&bi.bmiHeader, sizeof(BITMAPINFOHEADER));
        bi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
        bi.bmiHeader.biWidth = width;
        bi.bmiHeader.biHeight = height;
        bi.bmiHeader.biPlanes = 1;
        bi.bmiHeader.biBitCount = 24;
        bi.bmiHeader.biCompression = BI_RGB;
        ZeroMemory(bi.bmiColors, sizeof(RGBQUAD));
        // Allocate memory for bitmap bits
        int size = height * width;
        Pixel* newPixels = new Pixel[size];
        // Recompute the output
        //memcpy(newPixels, pixels, size*3);
        ComputeOutput(newPixels, tol1, tol2, useWhite, useBackground);
        HBITMAP bitmap = CreateDIBitmap(GetDC(hwnd), &bi.bmiHeader, CBM_INIT, newPixels, &bi, DIB_RGB_COLORS);
        delete newPixels;
        return bitmap;
    }
    and
    void Frame::UpdateBitmap(HWND hwnd, HBITMAP bitmap, int tol1, int tol2, bool useWhite, bool useBackground)
    {
        ZeroMemory(&bi.bmiHeader, sizeof(BITMAPINFOHEADER));
        bi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
        HDC hdc = GetDC(hwnd);
        if(!GetDIBits(hdc, bitmap, 0, bi.bmiHeader.biHeight, NULL, &bi, DIB_RGB_COLORS))
            MessageBox(NULL, "Can't get base image info!", "Error!", MB_ICONEXCLAMATION | MB_OK);
        // Allocate memory for bitmap bits
        int size = height * width;
        Pixel* newPixels = new Pixel[size];
        // Recompute the output
        //memcpy(newPixels, pixels, size*3);
        ComputeOutput(newPixels, tol1, tol2, useWhite, useBackground);
        // Push back to windows
        if(!SetDIBits(hdc, bitmap, 0, bi.bmiHeader.biHeight, newPixels, &bi, DIB_RGB_COLORS))
            MessageBox(NULL, "Can't set pixel data!", "Error!", MB_ICONEXCLAMATION | MB_OK);
        delete newPixels;
    }
    where the Pixel struct is just this:
    struct Pixel
    {
        unsigned char b, g, r;
    };
    Why does my update function not work? I always get the MessageBox saying "Can't set pixel data!" I used code similar to this when I was loading the original bitmap from a file and then editing the data, but now that I create the bitmap manually, it doesn't work.
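    One hedged possibility, not a confirmed fix: in UpdateBitmap the BITMAPINFOHEADER only has biSize filled in before GetDIBits is asked to complete it, so filling the header explicitly with the same fields CreateBitmap sets (width, height, planes, 24-bit RGB) before calling SetDIBits is worth trying; this sketch assumes width and height are still valid members of the Frame class:
    ZeroMemory(&bi.bmiHeader, sizeof(BITMAPINFOHEADER));
    bi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bi.bmiHeader.biWidth       = width;
    bi.bmiHeader.biHeight      = height;
    bi.bmiHeader.biPlanes      = 1;
    bi.bmiHeader.biBitCount    = 24;      // matches the Pixel struct (b, g, r)
    bi.bmiHeader.biCompression = BI_RGB;
    if (!SetDIBits(hdc, bitmap, 0, height, newPixels, &bi, DIB_RGB_COLORS))
        MessageBox(NULL, "Can't set pixel data!", "Error!", MB_ICONEXCLAMATION | MB_OK);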

    Read the article

  • Sending double quote character to CreateProcess?

    - by karikari
    I want to send the double quote character to my CreateProcess function. How can I do this the correct way? I want to send all of these characters: "%h"
    CreateProcess(L"C:\\identify -format ",L"\"%h\" trustedsnapshot.png",0,0,TRUE,NORMAL_PRIORITY_CLASS|CREATE_NO_WINDOW,0,0,&sInfo,&pInfo);
    Here is the full code:
    int ExecuteExternalFile()
    {
        SECURITY_ATTRIBUTES secattr;
        ZeroMemory(&secattr, sizeof(secattr));
        secattr.nLength = sizeof(secattr);
        secattr.bInheritHandle = TRUE;
        HANDLE rPipe, wPipe;
        // Create pipes to write and read data
        CreatePipe(&rPipe, &wPipe, &secattr, 0);
        STARTUPINFO sInfo;
        ZeroMemory(&sInfo, sizeof(sInfo));
        PROCESS_INFORMATION pInfo;
        ZeroMemory(&pInfo, sizeof(pInfo));
        sInfo.cb = sizeof(sInfo);
        sInfo.dwFlags = STARTF_USESTDHANDLES;
        sInfo.hStdInput = NULL;
        sInfo.hStdOutput = wPipe;
        sInfo.hStdError = wPipe;
        CreateProcess(L"C:\\identify", L" -format \"%h\" trustedsnapshot.png", 0, 0, TRUE, NORMAL_PRIORITY_CLASS|CREATE_NO_WINDOW, 0, 0, &sInfo, &pInfo);
        CloseHandle(wPipe);
        char buf[100];
        DWORD reDword;
        CString m_csOutput, csTemp;
        BOOL res;
        do
        {
            res = ::ReadFile(rPipe, buf, 100, &reDword, 0);
            csTemp = buf;
            m_csOutput += csTemp.Left(reDword);
        } while(res);
        //return m_csOutput;
        float fvar;
        //fvar = atof((const char *)(LPCTSTR)(m_csOutput)); ori
        //fvar = atof((LPCTSTR)m_csOutput);
        fvar = _tstof(m_csOutput);
        const size_t len = 256;
        wchar_t buffer[len] = {};
        _snwprintf(buffer, len - 1, L"%d", fvar);
        MessageBox(NULL, buffer, L"test print createprocess value", MB_OK);
        return fvar;
    }
    I need this function to return the integer value from the CreateProcess.
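    A hedged sketch of an alternative: pass NULL as lpApplicationName and put the entire command line, including the quoted "%h", into a single writable buffer passed as lpCommandLine, since CreateProcess may modify that buffer and treats its first token as the executable path; the exact path is kept as in the post:
    wchar_t cmd[] = L"\"C:\\identify\" -format \"%h\" trustedsnapshot.png";
    CreateProcess(NULL, cmd, 0, 0, TRUE,
                  NORMAL_PRIORITY_CLASS | CREATE_NO_WINDOW, 0, 0, &sInfo, &pInfo);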

    Read the article
