Search Results

Search found 27258 results on 1091 pages for 'installed programs'.


  • Autoconf -- including a static library (newbie)

    - by EB
    I am trying to migrate my application from a manual build to autoconf, which is working very nicely so far. But I have one static library that I can't figure out how to integrate.

    That library will NOT be located in the usual library locations - the location of the binary (.a file) and header (.h file) will be given as a configure argument. (Notably, even if I move the .a file to /usr/lib or anywhere else I can think of, it still won't work.) It is also not named traditionally (it does not start with "lib" or "l"). Manual compilation works with these (the directory is not predictable - this is just an example):

        gcc ... -I/home/john/mystuff /home/john/mystuff/helper.a

    (Uh, I actually don't understand why the .a file is referenced directly, not with -L or anything. Yes, I have a half-baked understanding of building C programs.)

    So, in my configure.ac, I can use the relevant configure argument to successfully find the header (.h file) using AC_CHECK_HEADER. Inside the AC_CHECK_HEADER I then add the location to CFLAGS, and the #include of the header file in the actual C code picks it up nicely. Given a configure argument that has been put into $location, and needed files named helper.h and helper.a (which are both in the same directory), here is what works so far:

        AC_CHECK_HEADER([$location/helper.h],
          [AC_DEFINE([HAVE_HELPER_H], [1], [found helper.h])
           CFLAGS="$CFLAGS -I$location"])

    Where I run into difficulties is getting the binary (.a file) linked in. No matter what I try, I always get an error about undefined references to the function calls for that library. I'm pretty sure it's a linkage issue, because I can fuss with the C code and make an intentional error in the function calls to that library, which produces earlier errors indicating that the function prototypes have been loaded and used to compile.

    I tried adding the location that contains the .a file to LDFLAGS and then doing an AC_CHECK_LIB, but it is not found. Maybe my syntax is wrong, or maybe I'm missing something more fundamental, which would not be surprising since I'm a newbie and don't really know what I'm doing. Here is what I have tried:

        AC_CHECK_HEADER([$location/helper.h],
          [AC_DEFINE([HAVE_HELPER_H], [1], [found helper.h])
           CFLAGS="$CFLAGS -I$location";
           LDFLAGS="$LDFLAGS -L$location";
           AC_CHECK_LIB(helper)])

    No dice. AC_CHECK_LIB is looking for -lhelper, I guess (or libhelper?), so I'm not sure if that's a problem. So I tried this, too (omit AC_CHECK_LIB and include the .a directly in LDFLAGS), without luck:

        AC_CHECK_HEADER([$location/helper.h],
          [AC_DEFINE([HAVE_HELPER_H], [1], [found helper.h])
           CFLAGS="$CFLAGS -I$location";
           LDFLAGS="$LDFLAGS -L$location/helper.a"])

    To emulate the manual compilation, I tried removing the -L, but that doesn't help:

        AC_CHECK_HEADER([$location/helper.h],
          [AC_DEFINE([HAVE_HELPER_H], [1], [found helper.h])
           CFLAGS="$CFLAGS -I$location";
           LDFLAGS="$LDFLAGS $location/helper.a"])

    I tried other combinations and permutations, but I think I might be missing something more fundamental....
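
    One hedged suggestion (a sketch, assuming standard GNU autotools semantics and the file names from the question): -L expects a directory, so -L$location/helper.a does nothing, and AC_CHECK_LIB only knows how to probe -lname style libraries. Since the archive is not named lib*.a, one common fix is to skip AC_CHECK_LIB and put the file itself in LIBS, optionally guarding it with AC_CHECK_FILE:

        AC_CHECK_FILE([$location/helper.a],
          [LIBS="$location/helper.a $LIBS"],
          [AC_MSG_ERROR([helper.a not found in $location])])

    This mirrors the manual build, where the full path to the .a file is simply placed on the link line like an extra object file.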

    Read the article

  • iPhone memory management

    - by Prazi
    I am a newbie to iPhone programming. I am not using Interface Builder in my programming. I have some doubts about memory management and the @property topics in iPhone development. Consider the following code:

        @interface LoadFlag : UIViewController {
            UIImage *flag;
            UIImageView *preview;
        }
        @property (nonatomic, retain) UIImageView *preview;
        @property (nonatomic, retain) UIImage *flag;

        @implementation LoadFlag
        @synthesize preview;
        @synthesize flag;

        - (void)viewDidLoad {
            flag = [UIImage imageNamed:@"myImage.png"];
            NSLog(@"Preview: %d\n", [preview retainCount]); // Count: 0 - but shouldn't it be 1, as I am retaining it in @property in the interface file?
            preview = [[UIImageView alloc] init];
            NSLog(@"Count: %d\n", [preview retainCount]);   // Count: 1
            preview.frame = CGRectMake(0.0f, 0.0f, 100.0f, 100.0f);
            preview.image = flag;
            [self.view addSubview:preview];
            NSLog(@"Count: %d\n", [preview retainCount]);   // Count: 2
            [preview release];
            NSLog(@"Count: %d\n", [preview retainCount]);   // Count: 1
        }

    When and why (what is the need) do I have to set @property with retain (in the above case, for UIImage & UIImageView)? I saw this statement in many sample programs but didn't understand the need for it.

    When I declare @property (nonatomic, retain) UIImageView *preview; the retain count is 0. Why doesn't it increase by 1, in spite of retaining it in @property?

    Also, when I call [self.view addSubview:preview]; the retain count increments by 1 again. In this case, does the autorelease pool release it for us later, or do we have to take care of releasing it? I am not sure, but I think that autorelease should handle it: we didn't explicitly retain it, so why should we worry about releasing it?

    Now, after the [preview release]; statement, my count is 1. I don't need the UIImageView anymore in my program, so when and where should I release it so that the count becomes 0 and the memory gets deallocated? Again, I am not sure, but I think autorelease should handle it, as we didn't explicitly retain it. What will happen if I release it in the -(void)dealloc method?

    In the statement flag = [UIImage imageNamed:@"myImage.png"]; I haven't allocated any memory for flag, but I can still use it in my program. In this case, if I do not allocate memory, who allocates & deallocates memory for it? Or is "flag" just a reference pointing to [UIImage imageNamed:@"myImage.png"]? If it is a reference only, do I need to release it?

    Thanks in advance.
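
    A hedged aside on the pattern this question trips over (a minimal sketch under manual reference counting, pre-ARC): the retain declared in @property only happens when the synthesized setter runs (self.preview = ...); assigning straight to the ivar bypasses it entirely.

        // Setter vs. ivar under manual reference counting:
        self.flag = [UIImage imageNamed:@"myImage.png"]; // setter retains the autoreleased image
        flag = [UIImage imageNamed:@"myImage.png"];      // plain ivar assignment: nothing is retained

        // Retains performed by the setters are balanced in dealloc:
        - (void)dealloc {
            [preview release];
            [flag release];
            [super dealloc];
        }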

    Read the article

  • iPhone app works hundreds of times, then crashes from memory error on startup, then never works unti

    - by peter
    I have a Cocos2d/OpenGL iPhone game. It's a universal app, and I'm dealing with an occasional but nasty error on the iPad. We are loading a lot of textures up front (3 2048x2048 textures). I'm working on reducing this up-front load, but what worries me is that I really don't understand the root cause of this crash that permanently breaks the app. This is the deal:

    1. App works fine for hundreds of plays on the iPad.
    2. Eventually (I'm guessing due to other programs using up some memory and not letting go, or whatever) the app starts crashing on startup. It just closes again in the middle of loading.
    3. The app will now never work again on that iPad, closing immediately every time, until the iPad is restarted.

    Obviously my app is demanding too much memory up front to work reliably every time - I get that. What I don't get is why, when it fails once, it has failed forever until the iPad is restarted. Can anyone explain what is going on here?

    EDIT: I forgot to add that the organizer crash logs just say low memory, like this, every time (I changed my app name to MyAppName below). Again, I know it's low memory, but why does it stay low memory until restart?

        Incident Identifier: E7A2507C-3FB1-4E3B-B315-09F094236541
        CrashReporter Key:   0fda9d667f2c6073f20a76809aa25438b6854d15
        OS Version:          iPhone OS 3.2 (7B367)
        Date:                2010-04-30 16:59:44 -0400

        Free pages:        437
        Wired pages:       17228
        Purgeable pages:   0
        Largest process:   MyAppName

        Processes
        Name                 UUID                                   Count resident pages
        MyAppName            <6307ce41802850944baa78d29224fa7f>     22385 (jettisoned) (active)
        mediaserverd         <ea8bac28b06fe3980fdd44b5caceb563>       242
        DTMobileIS           <a0f651e43881e66f50f8a95abea72921>      5826
        notification_pro     <4c9a7ee0a5bbe160465991228f2d2f2e>        67
        syslog_relay         <4ceaed776d2df957fa130712f4ef21d0>        66
        notification_pro     <4c9a7ee0a5bbe160465991228f2d2f2e>        67
        notification_pro     <4c9a7ee0a5bbe160465991228f2d2f2e>        67
        afcd                 <4f3c9566e33b4463f05603d990584e5d>        72
        ptpd                 <83de0f774bd6553d513ae9e19b0f9b56>       181
        syslogd              <66247e305d5c0bf6f1ce1cc950653263>        81
        lsd                  <a4d852c1c8da2b3d231bdc90887b52ba>       130
        iapd                 <a8534cbde4b90ad5915dd26ab03ff3e3>       204
        notifyd              <5e9d5bee7c3eae1c8b494c79eb11406e>        71
        BTServer             <64e4a6ea6b1240db2331e05a29caa862>       108
        CommCenter           <97bf297944ac4bde19bcee96dd23bd5f>       181
        SpringBoard          <c7a5904c12db7b14334a4edaa4cabaa9>      5339 (active)
        configd              <aca9fa3380322669164fd6b1a3864300>       373
        fairplayd.K48        <2d997ffca1a568f9c5400ac32d8f0782>        84
        locationd            <dd1ea88105c62173908ce767db5c4d37>       599
        mDNSResponder        <820560222d47a1f2a0dce98a7f8a9721>       108
        lockdownd            <497fd54c79a680bf29f5d9320f514613>       303
        MobileStorageMou     <c277b79c2157c4dc5cfc5c3ca35bd5f2>        69
        launchd              <66972eee4d865c4383b33d985d22994b>        98
        **End**

    Read the article

  • GLKit Memory Leak copyWithZone

    - by TommyT39
    Running the Instruments utility against the game I'm writing shows a bunch of memory leaks related to copyWithZone when I cycle through an array and draw some simple cube objects. I'm not sure of the best way to track this down, as I'm new to OpenGL programming. My program is using ARC and is set to build for iOS 5. I am initializing GLKit to use OpenGL ES 2.0 and using the BaseEffect so I don't have to write my own shaders etc. This shouldn't be rocket science. I'm guessing that I must not be releasing something within the draw function.

    One other thing to note is that I'm using 15 different textures; each cube can be 1 of 15 different ones. I have a property set on the cube class for the texture, and I set it as I create the cubes in their array. But I do load all 15 when my program's view did load starts. They are small .jpg files that are less than 75k each, and each cube uses the same texture all the way around, so it shouldn't be too big of an issue.

    Here is the code for my draw function - could you guys take a look and see if anything stands out as the problem?

        - (void)draw {
            GLKMatrix4 xRotationMatrix = GLKMatrix4MakeXRotation(rotation.x);
            GLKMatrix4 yRotationMatrix = GLKMatrix4MakeYRotation(rotation.y);
            GLKMatrix4 zRotationMatrix = GLKMatrix4MakeZRotation(rotation.z);
            GLKMatrix4 scaleMatrix = GLKMatrix4MakeScale(scale.x, scale.y, scale.z);
            GLKMatrix4 translateMatrix = GLKMatrix4MakeTranslation(position.x, position.y, position.z);

            GLKMatrix4 modelMatrix =
                GLKMatrix4Multiply(translateMatrix,
                    GLKMatrix4Multiply(scaleMatrix,
                        GLKMatrix4Multiply(zRotationMatrix,
                            GLKMatrix4Multiply(yRotationMatrix, xRotationMatrix))));
            GLKMatrix4 viewMatrix = GLKMatrix4MakeLookAt(0, 0, 1, 0, 0, -5, 0, 1, 0);

            effect.transform.modelviewMatrix = GLKMatrix4Multiply(viewMatrix, modelMatrix);
            effect.transform.projectionMatrix = GLKMatrix4MakePerspective(0.125*M_TAU, 1.0, 2, 0);
            effect.texture2d0.name = wallTexture.name;
            [effect prepareToDraw];

            glEnable(GL_DEPTH_TEST);
            glEnable(GL_CULL_FACE);

            glEnableVertexAttribArray(GLKVertexAttribPosition);
            glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, triangleVertices);
            glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
            glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 0, textureCoordinates);

            glDrawArrays(GL_TRIANGLES, 0, 18);

            glDisableVertexAttribArray(GLKVertexAttribPosition);
            glDisableVertexAttribArray(GLKVertexAttribTexCoord0);
        }
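
    One hedged observation on the posted draw code (a sketch, not a confirmed cause of the Instruments report): GLKMatrix4MakePerspective takes (fovyRadians, aspect, nearZ, farZ) and expects 0 < nearZ < farZ, while the call above passes 0 as the far plane. Something like the following is more conventional (M_TAU is assumed to be a project-defined 2*pi constant, and the 100.0f scene depth is an assumption):

        effect.transform.projectionMatrix =
            GLKMatrix4MakePerspective(0.125f * M_TAU, // 45-degree field of view
                                      1.0f,           // aspect ratio
                                      2.0f,           // near plane
                                      100.0f);        // far plane - must be beyond near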

    Read the article

  • How do you combine "Revision Control" with "WorkFlow" for R?

    - by Tal Galili
    Hello all,

    I remember coming across R users writing that they use "revision control" (e.g., "source control"), and I am curious to know: how do you combine "revision control" with your statistical analysis workflow?

    Two (very) interesting discussions talk about how to deal with the workflow, but neither of them refers to the revision control element:

    http://stackoverflow.com/questions/1266279/how-to-organize-large-r-programs
    http://stackoverflow.com/questions/1429907/workflow-for-statistical-analysis-and-report-writing

    A long update to the question:

    Following some of the people's answers, and Dirk's question in the comment, I would like to direct my question a bit more. After reading the Wiki article about "revision control" (which I was previously not familiar with), it was clear to me that when using revision control, what one does is build a development structure of his code. This structure either leads to a "final product" or to several branches.

    When building something like, let's say, a website, there is usually one end product you work towards (the website), with some prototypes along the way. But when doing a statistical analysis, the work (in my view) is different. Sometimes you know where you want to get to, but more often you explore: explore cleaning the dataset, explore different methods for statistical analysis, and ask various questions of your data (and I am writing this knowing how Frank Harrell and other experienced statisticians feel about data dredging). That is why the workflow question with statistical programming is (in my view) a serious and deep question, raising many issues. The simpler ones are technical:

    Which revision control software do you use (and why)?
    Which IDE do you use (and why)?

    The more interesting questions are about the work process:

    How do you structure your files? What do you keep as a separate file and what as a revision? Or, asking in a different way: what should be a "branch" and what should be a "sub project" in your code? For example: when starting to explore your data, should a plot be created and then erased because it didn't lead anywhere (but kept as a revision), or should there be a backup file of that path? (See the sketch after this list for one way to resolve this.)

    How you resolve this tension was my initial curiosity. The second question is "what might I be missing?". What rules (of thumb) should one follow so as to avoid common pitfalls doing statistical programming with version control?

    In my intuition, I feel that statistical programming is inherently different from software development (I am writing this without being a real expert in statistical programming, and even less so in software development). That's why I am unsure which of the lessons I have read here about version control would be applicable.

    Thanks a lot,
    Tal
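
    A minimal sketch of one way to pair a revision-control tool with an exploratory R project (Git is used for illustration; the layout and file names are invented, not from the question):

        # one repository per analysis project
        git init fruit-study && cd fruit-study
        mkdir -p data R reports
        # ...write R/clean_data.R, R/model_1.R, reports/draft.Rnw...
        git add -A
        git commit -m "First pass at cleaning and a baseline model"

        # dead-end explorations live on branches instead of backup files
        git checkout -b try-mixed-model
        # ...edit and commit; later merge the branch or simply abandon it...

    Under this convention the "plot that led nowhere" question largely answers itself: the plot's script is committed on its branch, so it stays recoverable as history without cluttering the working directory.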

    Read the article

  • Converting C source to C++

    - by Barry Kelly
    How would you go about converting a reasonably large (300K), fairly mature C codebase to C++?

    The kind of C I have in mind is split into files roughly corresponding to modules (i.e. less granular than a typical OO class-based decomposition), using internal linkage in lieu of private functions and data, and external linkage for public functions and data. Global variables are used extensively for communication between the modules. There is a very extensive integration test suite available, but no unit (i.e. module) level tests.

    I have in mind a general strategy:

    1. Compile everything in C++'s C subset and get that working.
    2. Convert modules into huge classes, so that all the cross-references are scoped by a class name, but leaving all functions and data as static members, and get that working.
    3. Convert huge classes into instances with appropriate constructors and initialized cross-references; replace static member accesses with indirect accesses as appropriate; and get that working.
    4. Now, approach the project as an ill-factored OO application, and write unit tests where dependencies are tractable, and decompose into separate classes where they are not; the goal here would be to move from one working program to another at each transformation.

    Obviously, this would be quite a bit of work. Are there any case studies / war stories out there on this kind of translation? Alternative strategies? Other useful advice?

    Note 1: the program is a compiler, and probably millions of other programs rely on its behaviour not changing, so wholesale rewriting is pretty much not an option.

    Note 2: the source is nearly 20 years old, and has perhaps 30% code churn (lines modified + added / previous total lines) per year. It is heavily maintained and extended, in other words. Thus, one of the goals would be to increase maintainability.

    [For the sake of the question, assume that translation into C++ is mandatory, and that leaving it in C is not an option. The point of adding this condition is to weed out the "leave it in C" answers.]
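
    A hedged sketch of step 2 of this strategy (the module and symbol names are invented for illustration): a C module parser.c becomes a class whose members are all static, so call sites change only by gaining a scope qualifier:

        /* Before (C), in parser.c:
         *   static int depth;                  -- internal linkage
         *   void parse_unit(const char *src);  -- external linkage
         */

        // After (C++): one huge class per module, everything still static.
        class Parser {
        public:
            static void parse_unit(const char *src);  // was extern
        private:
            static int depth;                         // was file-static
        };

        int Parser::depth = 0;

        void Parser::parse_unit(const char *src) {
            ++depth;
            // ...original body unchanged...
            --depth;
        }

        // Cross-module call sites change from parse_unit(s) to Parser::parse_unit(s);
        // the old internal linkage is now enforced by the private: section instead.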

    Read the article

  • OpenGL Shader Compile Error

    - by Tomas Cokis
    I'm having a bit of a problem with my code for compiling shaders, namely that they both register as failed compiles and no log is received. This is the shader-compiling code:

        /* Make the shader */
        Uint size;
        GLchar* file;
        loadFileRaw(filePath, file, &size);
        const char * pFile = file;
        const GLint pSize = size;

        newCashe.shader = glCreateShader(shaderType);
        glShaderSource(newCashe.shader, 1, &pFile, &pSize);
        glCompileShader(newCashe.shader);

        GLint shaderCompiled;
        glGetShaderiv(newCashe.shader, GL_COMPILE_STATUS, &shaderCompiled);
        if(shaderCompiled == GL_FALSE) {
            ReportFiler->makeReport("ShaderCasher.cpp", "loadShader()", "Shader did not compile",
                "The shader " + filePath + " failed to compile, reporting the error - " +
                OpenGLServices::getShaderLog(newCashe.shader));
        }

    And these are the support functions:

        bool loadFileRaw(string fileName, char* data, Uint* size) {
            if (fileName != "") {
                FILE *file = fopen(fileName.c_str(), "rt");
                if (file != NULL) {
                    fseek(file, 0, SEEK_END);
                    *size = ftell(file);
                    rewind(file);
                    if (*size > 0) {
                        data = (char*)malloc(sizeof(char) * (*size + 1));
                        *size = fread(data, sizeof(char), *size, file);
                        data[*size] = '\0';
                    }
                    fclose(file);
                }
            }
            return data;
        }

        string OpenGLServices::getShaderLog(GLuint obj) {
            int infologLength = 0;
            int charsWritten = 0;
            char *infoLog;
            glGetShaderiv(obj, GL_INFO_LOG_LENGTH, &infologLength);
            if (infologLength > 0) {
                infoLog = (char *)malloc(infologLength);
                glGetShaderInfoLog(obj, infologLength, &charsWritten, infoLog);
                string log = infoLog;
                free(infoLog);
                return log;
            }
            return "<Blank Log>";
        }

    and the shaders I'm loading:

        void main(void) {
            gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
        }

        void main(void) {
            gl_Position = ftransform();
        }

    In short, for every shader I compile I get:

        From: ShaderCasher.cpp, In: loadShader(), Subject: Shader did not compile
        Message: The shader Data/Shaders/Standard/standard.vs failed to compile, reporting the error - <Blank Log>

    I've tried replacing the file reading with just a hard-coded string, but I get the same error, so there must be something wrong with how I'm compiling them. I have run and compiled example programs with shaders, so I doubt my drivers are the issue, but in any case I'm on an Nvidia 8600M GT. Can anyone help?
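
    A hedged observation on the posted helper (a sketch, not a confirmed diagnosis): loadFileRaw takes data by value, so the caller's file pointer is never assigned, and glShaderSource ends up compiling garbage. Passing the buffer out through a char** (or returning it) would be one fix:

        // Sketch: hand the buffer back through an out-parameter so the caller sees it.
        bool loadFileRaw(const std::string &fileName, char **data, Uint *size) {
            *data = NULL;
            FILE *file = fopen(fileName.c_str(), "rt");
            if (!file) return false;
            fseek(file, 0, SEEK_END);
            *size = ftell(file);
            rewind(file);
            if (*size > 0) {
                *data = (char *)malloc(*size + 1);
                *size = fread(*data, 1, *size, file);
                (*data)[*size] = '\0';
            }
            fclose(file);
            return *data != NULL;
        }

        // Caller side:
        //   char *file = NULL;
        //   loadFileRaw(filePath, &file, &size);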

    Read the article

  • How to distort the desktop screen

    - by HaifengWang
    Hi friends,

    I want to change the shape of the desktop screen, so that whatever is displayed on the desktop is distorted at the same time, and the user can still operate the PC with the mouse on the distorted desktop (run applications, open "My Computer" and so on). I think I must first get the projection matrix of the screen coordinates, then transform the matrix and map the desktop buffer image onto the distorted mesh. Are there any interfaces which can modify the shape of the desktop screen in OpenGL or DirectX? Would you please give me some tips on it? Thank you very much in advance. Please refer to the picture at http://oi53.tinypic.com/bhewdx.jpg

    BR, Haifeng

    Addition 1: I'm sorry! Maybe I didn't express clearly what I want to implement. What I want is to modify the shape of the screen, so we can distort the shapes of all the applications running on Windows at the same time. For example, the window of "My Computer" will be distorted along with the distortion of the desktop screen, and we can still operate the PC with the mouse from the distorted desktop (click a shortcut to run a program).

    Addition 2: The projection matrix is just my assumption. There isn't any desktop projection matrix by which the desktop surface is projected to the screen. What I want to implement is to change the shape of the desktop, the same as mapping the desktop onto a 3D mesh, while the user can still operate the OS on the distorted desktop (click a shortcut to run a program, open IE to surf the internet).

    Addition 3: The shapes of all the programs running on the OS are changed with the distortion of the screen. It's realtime. The user can still operate the OS on the distorted screen as usual. Maybe we can intercept or override the GPU itself to implement the effect. I'm investigating GDI; I think I can find some clue there. The first step is to find out how to show the desktop on the screen.

    Read the article

  • Running daemon through rsh

    - by Max
    I want to run a program as a daemon on a remote Unix machine. I have an rsh connection, and I want the program to keep running after disconnection. Suppose I have two programs: util.cpp and forker.cpp. util.cpp is some utility; for our purpose let it be just an infinite loop:

        // util.cpp
        int main() {
            while (true) {};
            return 0;
        }

    forker.cpp takes some program and runs it in a separate process through fork() and execve():

        // forker.cpp
        #include <stdio.h>
        #include <errno.h>
        #include <stdlib.h>
        #include <unistd.h>

        int main(int argc, char** argv) {
            if (argc != 2) {
                printf("./a.out <program_to_fork>\n");
                exit(1);
            }
            pid_t pid;
            if ((pid = fork()) < 0) {
                perror("fork error.");
                exit(1);
            } else if (!pid) {
                // Child.
                if (execve(argv[1], &(argv[1]), NULL) == -1) {
                    perror("execve error.");
                    exit(1);
                }
            } else {
                // Parent: do nothing.
            }
            return 0;
        }

    If I run:

        ./forker util

    forker finishes very quickly, bash 'is not paused', and util is running as a daemon. But if I run:

        scp forker remote_server://some_path/
        scp util remote_server://some_path/
        rsh remote_server 'cd /some_path; ./forker util'

    then it is all the same (i.e. on the remote server forker finishes quickly and util is running), but my bash on the local machine is paused. It is waiting for util to stop (I checked: if util returns, then it is OK), but I don't understand why!

    There are two questions: 1) Why is it paused when I run it through rsh? I am sure that I chose some stupid way to run a daemon. So 2) How do I run some program as a daemon in C/C++ on unix-like platforms? Tnx!
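
    A hedged sketch of the classic answer to question 2 (plain POSIX double-fork daemonization; redirecting stdio also bears on question 1, since rsh typically keeps waiting while the remote command's stdout/stderr descriptors are still open in the background child):

        #include <fcntl.h>
        #include <stdlib.h>
        #include <unistd.h>

        /* Minimal daemonize: detach from the session and from stdio. */
        static void daemonize(void) {
            if (fork() > 0) exit(0);   /* parent exits; child carries on  */
            setsid();                  /* new session, no controlling tty */
            if (fork() > 0) exit(0);   /* can never reacquire a tty now   */
            int fd = open("/dev/null", O_RDWR);
            dup2(fd, STDIN_FILENO);    /* rsh waits on these descriptors, */
            dup2(fd, STDOUT_FILENO);   /* so point them at /dev/null      */
            dup2(fd, STDERR_FILENO);
            if (fd > STDERR_FILENO) close(fd);
        }

    Calling daemonize() in the forker child before execve() - or running the remote command with its streams redirected, e.g. rsh remote_server 'cd /some_path; ./forker util >/dev/null 2>&1 </dev/null' - should let the local shell return immediately.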

    Read the article

  • Parsing string logic issue c#

    - by N0xus
    This is a follow-on from this question.

    My program is taking in a string that is comprised of two parts: a distance value and an id number, respectively. I've split these up and stored them in local variables inside my program. All of the id numbers are stored in a dictionary and are used to check the incoming distance value. Though I should note that each string that gets sent into my program from the device is passed along as a single string. The next time my program receives a signal from a device, it overwrites the previous data that was there before.

    Should the id key coming into my program match one inside my dictionary, then a variable held next to my dictionary's key should be updated. However, when I run my program, I don't get 6 different values; I only get the same value, and they all update at the same time. This is all the code I have written trying to do this:

        Dictionary<string, string> myDictonary = new Dictionary<string, string>();
        string Value1 = "";
        string Value2 = "";
        string Value3 = "";
        string Value4 = "";
        string Value5 = "";
        string Value6 = "";

        void Start()
        {
            myDictonary.Add("11111111", Value1);
            myDictonary.Add("22222222", Value2);
            myDictonary.Add("33333333", Value3);
            myDictonary.Add("44444444", Value4);
            myDictonary.Add("55555555", Value5);
            myDictonary.Add("66666666", Value6);
        }

        private void AppendString(string message)
        {
            testMessage = message;
            string[] messages = message.Split(',');
            foreach(string w in messages)
            {
                if(!message.StartsWith(" "))
                    outputContent.text += w + "\n";
            }
            messageCount = "RSSI number " + messages[0];
            uuidString = "UUID number " + messages[1];
            if(myDictonary.ContainsKey(messages[1]))
            {
                Value1 = messageCount;
                Value2 = messageCount;
                Value3 = messageCount;
                Value4 = messageCount;
                Value5 = messageCount;
                Value6 = messageCount;
            }
        }

    How can I get it so that when the program receives the first key, for example 11111111, it only updates Value1? The information that comes through can be dynamic, so I'd like to avoid hardcoding as much information as I possibly can.
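
    A hedged sketch of one way to update only the matching entry (keeping the question's field names; note that C# strings stored into the dictionary are copies, so the dictionary itself has to be the single source of truth rather than the six Value fields):

        // Current reading per device id; only the id that arrives is touched.
        private readonly Dictionary<string, string> readings = new Dictionary<string, string>
        {
            { "11111111", "" }, { "22222222", "" }, { "33333333", "" },
            { "44444444", "" }, { "55555555", "" }, { "66666666", "" },
        };

        private void AppendString(string message)
        {
            string[] messages = message.Split(',');
            string messageCount = "RSSI number " + messages[0];
            string id = messages[1];

            if (readings.ContainsKey(id))
            {
                readings[id] = messageCount;   // updates exactly one entry
            }
        }

        // Wherever Value1 was read before, use readings["11111111"], and so on.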

    Read the article

  • MATLAB arbitrary code execution

    - by aristotle2600
    I am writing an automatic grader program under Linux. There are several graders written in MATLAB, so I want to tie them all together and let students run a program to do an assignment, and have them choose the assignment. I am using a C++ main program, which then has mcc-compiled MATLAB libraries linked to it. Specifically, my program reads a config file for the names of the various MATLAB programs and other information, then uses that information to present choices to the student. So, if an assignment changes, or is added or removed, all you have to do is change the config file.

    The idea is that the program then invokes the correct MATLAB library that has been compiled with mcc. But that means the libraries have to be recompiled if a grader gets changed. Worse, the whole program must be recompiled if a grader is added or removed. So, I would like one simple, unchanging MATLAB library function to call the grader m-files directly. I currently have such a library, which uses eval on a string passed to it from the main program. The problem is that when I do this, apparently, mcc absorbs the grader m-code into itself; changing the grader m-code after compilation has no effect. I would like for this not to happen.

    It was brought to my attention that MathWorks may not want me to be able to do this, since it could bypass MATLAB entirely. That is not my intention, and I would be happy with a solution that requires a full MATLAB install.

    My possible solutions are to use a MEX file for the main program, or have the main program call an mcc library, which then calls a MEX file, which then calls the proper grader. The reason I am hesitant about the first solution is that I'm not sure how many changes I would have to make to my code to make it work; my code is C++, not C, which I think makes things more complicated. The second solution, though, may just be more complicated and ultimately have the same problem.

    So, any thoughts on this situation? How should I do this?

    Read the article

  • How to get Augmented Reality: A Practical Guide examples working?

    - by Glen
    I recently bought the book Augmented Reality: A Practical Guide (http://pragprog.com/titles/cfar/augmented-reality). It has example code that it says runs on Windows, MacOS and Linux, but I can't get the binaries to run. Has anyone got this book and gotten the binaries to run on Ubuntu?

    I also can't figure out how to compile the examples in Ubuntu. How would I do this? Here is what the book says to do:

    Compiling for Linux

    Refreshingly, there are no changes required to get the programs in this chapter to compile for Linux, but as with Windows, you'll first have to find your GL and GLUT files. This may mean you'll have to download the correct version of GLUT for your machine. You need to link in the GL, GLU, and GLUT libraries and provide a path to the GLUT header file and the files it includes. See whether there is a glut.h file in the /usr/include/GL directory; otherwise, look elsewhere for it - you could use the command find / -name "glut.h" to search your entire machine, or you could use the locate command (locate glut.h). You may need to customize the paths, but here is an example of the compile command:

        gcc -o opengl_template opengl_template.cpp -I /usr/include/GL -I /usr/include -lGL -lGLU -lglut

    gcc is a C/C++ compiler that should be present on your Linux or Unix machine. The -I /usr/include/GL command-line argument tells gcc to look in /usr/include/GL for the include files. In this case, you'll find glut.h and what it includes. When linking in libraries with gcc, you use the -lX switch - where X is the name of your library and there is a corresponding libX.a file somewhere in your path. For this example, you want to link in the library files libGL.a, libGLU.a, and libglut.a, so you will use the gcc arguments -lGL -lGLU -lglut. These three files are found in the default directory /usr/lib/, so you don't need to specify their location as you did with glut.h. If you did need to specify the library path, you would add -L to the path. To run your compiled program, type ./opengl_template or, if the current directory is in your shell's paths, just opengl_template.

    When working in Linux, it's important to know that you may need to keep your texture files to a maximum of 256 by 256 pixels or find the settings in your system to raise this limit. Often an OpenGL program will work in Windows but produce a blank white texture in Linux until the texture size is reduced.

    The above instructions make no sense to me. Do I have to use gcc to compile, or can I use Eclipse? If I use either Eclipse or gcc, what do I need to do to compile and run the program?
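
    A hedged sketch of how this usually shakes out in practice (the script name is invented): gcc does the compiling either way - an IDE like Eclipse CDT just invokes it for you - so the quoted command can simply be wrapped in a small shell script and run from a terminal in the directory containing opengl_template.cpp:

        #!/bin/sh
        # build.sh - wrap the book's compile line, then run the result
        gcc -o opengl_template opengl_template.cpp \
            -I /usr/include/GL -I /usr/include \
            -lGL -lGLU -lglut
        ./opengl_template

    On Ubuntu the headers and libraries come from packages (freeglut3-dev and build-essential are the usual suspects, though the exact names are an assumption about your release); install those first if glut.h is missing.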

    Read the article

  • Is there a programming language with semantics close to English?

    - by ivo s
    Most languages allow you to 'tweak', to a certain extent, parts of the syntax (C++, C#) and/or the semantics that you will be using in your code (Katahdin, Lua). But I have not heard of a language that can just completely define how your code will look. So isn't there some language which already exists that has the capability to override all syntax and define semantics?

    An example of what I want to do, starting from the C# code below:

        foreach(Fruit fruit in Fruits)
        {
            if(fruit is Apple)
            {
                fruit.Price = fruit.Price/2;
            }
        }

    I want to be able to write the above code in my perfect language like this:

        Check if any fruits are Macintosh apples and discount the price by 50%.

    The advantages that come to my mind, looking from a coder's perspective at this "imaginary" language, are:

    It's very clear what is going on (self-descriptive) - it's plain English after all; even a kid would understand my program.

    It hides all the complexities which I have to write in C#. But why should I care to learn how if statements, arithmetic operators etc. work, since they are already implemented?

    The disadvantages that I see for a coder who will maintain this program are:

    Maybe you would express this program differently from me, so you may not get all the information that I've expressed in my sentence.

    Programs can be quite verbose and hard to debug. But if it were possible to even approximate the type of syntax above, maybe more people would start programming, right? That would be amazing, I think. I could go to work and just write an essay to draw a square on a WinForm, like this:

        Create a form called MyGreetingForm. Draw a square in the middle of MyGreetingForm with a side of 100 points. In the middle of the square write "Hello! Click here to continue" in Arial font.

    In the above code the parser must basically guess that I want to use the unnamed square from the previous sentence. It'd be hard to write such a smart parser, I guess, yet it's so simple what I want to do.

        If the user clicks on the square in the middle of MyGreetingForm, show MyMainForm.

    In the above code, 'basically' the compiler must: 1) generate an event handler, 2) check if there is any square in the middle of the form, and if there is, 3) hide the form and show another form.

    It looks very hard to do, but it doesn't look impossible IMO, at least to approximate this (I can personally generate a parser to perform the 3 steps above, no problem, and it's basically the same thing the compiler has to do anyway when you add, even in C#, a.MyEvent += handler; so I don't see a problem here). So I'm thinking maybe somebody already did something like this? Or is there some practical burden of complexity in creating such an 'essay style' programming language which I can't see? I mean, what's the worst that can happen if the parser is not that good? Your program will crash, so you have to re-word it :)

    Read the article

  • Getting up to speed on modern architecture

    - by Matt Thrower
    Hi,

    I don't have any formal qualifications in computer science; rather, I taught myself classic ASP back in the days of the dotcom boom and managed to get myself a job, and my career developed from there. I was a confident and, I think, pretty good programmer in ASP 3, but as others have observed, one of the problems with classic ASP was that it did a very good job of hiding the nitty-gritty of http, so you could become quite competent as a programmer on the basis of a relatively poor understanding of the technology you were working with.

    When I changed to .NET, at first I treated it like classic ASP, developing stand-alone applications as individual websites simply because I didn't know any better at the time. I moved jobs at this point and spent the next several years working on a single site whose architecture relied heavily on custom objects: in other words, I gained a lot of experience working with .NET as a middle-tier development tool using a quite old-fashioned approach to OO design along the lines of the classic "car" class example that's so often used to teach OO - breaking down programs into blocks of functionality and basing your classes and methods around that. Although we worked under an Agile approach to manage the work, the whole setup was classic client/server stuff. That suited me, and I gradually got to grips with .NET and started using it far more in the manner that it should be used, and I began to see the power inherent in the technology and precisely why it was so much better than good old ASP 3.

    In my latest job I have found myself suddenly dropped in at the deep end with two quite young, skilled and very cutting-edge programmers. They've built a site architecture which is modelled along a lot of stuff which is new to me and which, in truth, I'm having a lot of trouble understanding. The application is built on a cloud computing model with multi-tenancy, and the architecture is all loosely coupled, using a lot of interfaces, factories and the like. They use nHibernate a lot too. Shortly after I joined, both these guys left, and I'm now supposedly the senior developer on a system whose technology and architecture I don't really understand, and I have no-one to ask questions of. Except you, the internet.

    Frankly, I feel like I've been pitched in at the deep end and I'm sinking. I'm not sure if this is because I lack the educational background to understand this stuff, if I'm simply not mathematically minded enough for modern computing (my maths was never great - my approach to design is often to simply debug until it works, then refactor until it looks neat), or whether I've simply been presented with too much of too radical a nature at once. But the only way to find out which it is is to try and learn it.

    So can anyone suggest some good places to start? Good books, tutorials or blogs? I've found a lot of internet material simply presupposes a level of understanding that I just don't have. Your advice is much appreciated. Help a middle-aged, stuck-in-the-mud developer get enthusiastic again! Please!

    Read the article

  • The 80 column limit, still useful?

    - by Tim Post
    Related: While coding, how many columns do you format for? Is there a valid reason for enforcing a maximum width of 80 characters in a code file, this day and age?

    I mostly use C; however, this question is language agnostic. It's also subjective, so I'll tag it as such. Many individual projects set their own various coding standards, a guide to adjust your coding style to, and many enforce an 80 column limit on code - i.e., if someone is stuck with a dumb 80 x 25 terminal, don't force their editor of choice to wrap your lines, and don't force them to turn off wrapping. Both private and open source projects usually have some style guidelines.

    My question is: in this day and age, is that requirement more of a pest than a helper? Does anyone still log in via the local console with no framebuffer and actually edit code? If so, how often, and why can't you use SSH? I help to manage a few open source projects, and I was considering extending this limit to 110 columns, but I wanted to get feedback first. So, any feedback is appreciated. I can see the need to make certain OUTPUT of programs (i.e. a --help /h display) 80 columns or less, but I really don't see the need to force people to break up code under 110 columns long into 2 lines, when it's easier to read on one line. I can also see the case for adhering to an 80 column limit if you're writing code that will be used on microcontrollers that have to be serviced in the field with a god-knows-what terminal emulator.

    Beyond that, what are your thoughts?

    Edit: This is not an exact duplicate. I am asking very specific questions, such as how many people are actually still using such a display. I am also not asking "what is a good column limit"; I'm proposing one and hoping to gather feedback. Beyond that, I'm also citing cases where the 80 column limit is still a good idea. I don't want a guide to my own "c-style"; I'm hoping to adjust standards for several projects. If the duplicate in question had answered all of my questions, I would not have posted this one :) That will teach me to mention it next time.

    Edit 2: question |= COMMUNITY_WIKI

    Read the article

  • What do I need to distribute (keys, certs) for Python w/ SSL-socket connection?

    - by fandingo
    I'm trying to write a generic server-client application that will be able to exchange data amongst servers. I've read over quite a few OpenSSL documents, and I have successfully set up my own CA and created a cert (and private key) for testing purposes. I'm stuck with Python 2.3, so I can't use the standard "ssl" library. Instead, I'm stuck with PyOpenSSL, which doesn't seem bad, but there aren't many documents out there about it.

    My question isn't really about getting it working. I'm more confused about the certificates and where they need to go. Here are my two programs that do work:

    Server:

        #!/bin/env python
        from OpenSSL import SSL
        import socket
        import pickle

        def verify_cb(conn, cert, errnum, depth, ok):
            print('Got cert: %s' % cert.get_subject())
            return ok

        ctx = SSL.Context(SSL.TLSv1_METHOD)
        ctx.set_verify(SSL.VERIFY_PEER|SSL.VERIFY_FAIL_IF_NO_PEER_CERT, verify_cb)
        # ??????
        ctx.use_privatekey_file('./Dmgr-key.pem')
        ctx.use_certificate_file('Dmgr-cert.pem')
        # ??????
        ctx.load_verify_locations('./CAcert.pem')

        server = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
        server.bind(('', 50000))
        server.listen(3)
        a, b = server.accept()
        c = a.recv(1024)
        print(c)

    Client:

        from OpenSSL import SSL
        import socket
        import pickle

        def verify_cb(conn, cert, errnum, depth, ok):
            print('Got cert: %s' % cert.get_subject())
            return ok

        ctx = SSL.Context(SSL.TLSv1_METHOD)
        ctx.set_verify(SSL.VERIFY_PEER, verify_cb)
        # ??????????
        ctx.use_privatekey_file('/home/justin/code/work/CA/private/Dmgr-key.pem')
        ctx.use_certificate_file('/home/justin/code/work/CA/Dmgr-cert.pem')
        # ?????????
        ctx.load_verify_locations('/home/justin/code/work/CA/CAcert.pem')

        sock = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
        sock.connect(('10.0.0.3', 50000))

        a = Tester(2, 2)
        b = pickle.dumps(a)
        sock.send("Hello, world")
        sock.flush()
        sock.send(b)
        sock.shutdown()
        sock.close()

    I found this information at ftp://ftp.pbone.net/mirror/ftp.pld-linux.org/dists/2.0/PLD/i586/PLD/RPMS/python-pyOpenSSL-examples-0.6-2.i586.rpm which contains some example scripts. As you might gather, I don't fully understand the sections between the "# ??????" markers. I don't get why the certificate and private key are needed on both the client and server. I'm not sure where each should go, but shouldn't I only need to distribute one part of the key (probably the public part)? It undermines the purpose of having asymmetric keys if you still need both on each server, right?

    I tried alternately removing either the pkey or cert on either box, and I get the following error no matter which I remove:

        OpenSSL.SSL.Error: [('SSL routines', 'SSL3_READ_BYTES', 'sslv3 alert handshake failure'), ('SSL routines', 'SSL3_WRITE_BYTES', 'ssl handshake failure')]

    Could someone explain whether this is the expected behavior for SSL? Do I really need to distribute the private key and public cert to all my clients? I'm trying to avoid any huge security problems, and leaking private keys would tend to be a big one... Thanks for the help!
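
    A hedged note on the usual division of files (standard TLS practice rather than anything PyOpenSSL-specific): each side keeps its own private key plus certificate and never ships them anywhere; only the CA certificate is distributed. The client is forced to present a key/cert pair here only because the server passes VERIFY_FAIL_IF_NO_PEER_CERT. If the server dropped that flag, a client context could plausibly be as small as:

        from OpenSSL import SSL

        ctx = SSL.Context(SSL.TLSv1_METHOD)
        # Verify the server against the CA we trust; no client key/cert needed
        # unless the server demands client authentication.
        ctx.set_verify(SSL.VERIFY_PEER, verify_cb)
        ctx.load_verify_locations('./CAcert.pem')

    The handshake failure seen when removing files on the server side is expected: the server must always have its own key and certificate in order to identify itself.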

    Read the article

  • C++ Vector vs Array (Time)

    - by vsha041
    I have two programs here; both do exactly the same task. They just set a boolean array / vector to the value true. The program using vector takes 27 seconds to run, whereas the program using an array 5 times greater in size takes less than 1 second. I would like to know the exact reason why there is such a major difference. Are vectors really that inefficient?

    Program using vectors:

        #include <iostream>
        #include <vector>
        #include <ctime>
        using namespace std;

        int main(){
            const int size = 2000;
            time_t start, end;
            time(&start);
            vector<bool> v(size);
            for(int i = 0; i < size; i++){
                for(int j = 0; j < size; j++){
                    v[i] = true;
                }
            }
            time(&end);
            cout<<difftime(end, start)<<" seconds."<<endl;
        }

    Runtime - 27 seconds

    Program using an array:

        #include <iostream>
        #include <ctime>
        using namespace std;

        int main(){
            const int size = 10000; // 5 times more size
            time_t start, end;
            time(&start);
            bool v[size];
            for(int i = 0; i < size; i++){
                for(int j = 0; j < size; j++){
                    v[i] = true;
                }
            }
            time(&end);
            cout<<difftime(end, start)<<" seconds."<<endl;
        }

    Runtime - < 1 second

    Platform - Visual Studio 2008
    OS - Windows Vista 32 bit SP 1
    Processor - Intel(R) Pentium(R) Dual CPU T2370 @ 1.73GHz
    Memory (RAM) - 1.00 GB

    Thanks
    Amare
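
    A hedged aside on what the benchmark is measuring (a sketch; timings will vary by machine and optimizer settings): std::vector<bool> is a bit-packed specialization whose operator[] returns a proxy object rather than a plain reference, which is far more work per write than storing to a raw bool array, especially in an unoptimized build. Swapping in a byte-sized element type usually closes most of the gap:

        #include <ctime>
        #include <iostream>
        #include <vector>
        using namespace std;

        int main() {
            const int size = 2000;
            time_t start, end;
            time(&start);

            // vector<char> avoids the bit-proxy that vector<bool> returns
            vector<char> v(size);
            for (int i = 0; i < size; i++)
                for (int j = 0; j < size; j++)
                    v[i] = true;

            time(&end);
            cout << difftime(end, start) << " seconds." << endl;
        }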

    Read the article

  • Is there any loose coupling mechanism in Objective-C + Cocoa like C# delegates or C++/Qt signals+slots?

    - by Eye of Hell
    Hello. For large programs, the standard way to tackle complexity is to divide the program code into small objects. Most of today's programming languages offer this functionality via classes, and so does Objective-C. But after the source code is separated into small objects, the second challenge is to somehow connect them with each other. Standard approaches, supported by most languages, are composition (one object is a member field of another), inheritance, templates (generics) and callbacks. More cryptic techniques include method-level delegates (C#) and signals+slots (C++/Qt).

    I like the delegates / signals idea, since while connecting two objects I can connect individual methods with each other, without the objects knowing anything of each other. For C#, it will look like this:

        var object1 = new CObject1();
        var object2 = new CObject2();
        object1.SomethingHappened += object2.HandleSomething;

    In this code, if object1 calls its SomethingHappened delegate (like a normal method call), the HandleSomething method of object2 will be called. For C++/Qt, it will look like this:

        var object1 = new CObject1();
        var object2 = new CObject2();
        connect(object1, SIGNAL(SomethingHappened()), object2, SLOT(HandleSomething()));

    The result will be exactly the same. This technique has some advantages and disadvantages, but generally I like it more than interfaces, since if the program's code base grows I can change connections and add new ones without creating tons of interfaces.

    After examining Objective-C I haven't found any way to use the technique I like :(. It seems that Objective-C supports message passing perfectly well, but it requires object1 to have a pointer to object2 in order to pass it a message. If some object needs to be connected to lots of other objects, in Objective-C I will be forced to give it pointers to each of the objects it must be connected to.

    So, the question :). Is there any approach in Objective-C programming that closely resembles the delegate / signal+slot type of connection, and not 'give the first object a pointer to the second object so it can pass messages to it'? Method-level connections are a bit more preferable to me than object-level connections ^_^.
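
    A hedged sketch of one Cocoa-native answer (the object names and selector are taken from the question, the notification name is invented): NSNotificationCenter connects a sender and a receiver method without either holding a pointer to the other, much like a signal/slot pair:

        // Receiver side: subscribe a specific method to a named "signal".
        [[NSNotificationCenter defaultCenter] addObserver:object2
                                                 selector:@selector(handleSomething:)
                                                     name:@"SomethingHappened"
                                                   object:nil];

        // Sender side: fire the "signal" without knowing who is listening.
        [[NSNotificationCenter defaultCenter] postNotificationName:@"SomethingHappened"
                                                            object:object1];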

    Read the article

  • Java - StackOverflowError on recursive 2D boolean array method that shouldn't happen

    - by David W.
    Hey everyone, I'm working on a runnable Java applet that has a fill feature much like the fill method in drawing programs such as Microsoft Paint. This is how my filling method works:

    1.) The applet gets the color that the user clicked on using .getRGB.

    2.) The applet creates a 2D boolean array of all the pixels in the window, with the value "true" if that pixel is the same color as the color clicked on, or "false" if not. The point of this step is to keep the .getRGB method out of the recursive method, to hopefully prevent this error.

    3.) The applet recursively searches the 2D array of booleans where the user clicked, recording each adjacent point that is "true" in an ArrayList. The method then changes each point it records to false and continues.

    4.) The applet paints every point stored in the ArrayList to a user-selected color.

    All of the above steps work PERFECTLY if the user clicks within a small area, where only a few thousand pixels or so have their color changed. If the user selects a large area however (such as about 360,000 / the size of the applet window), the applet gets to the recursive stage and then outputs this error:

        Exception in thread "AWT-EventQueue-1" java.lang.StackOverflowError
            at java.util.ArrayList.add(ArrayList.java:351)
            at paint.recursiveSearch(paint.java:185)
            at paint.recursiveSearch(paint.java:190)
            at paint.recursiveSearch(paint.java:190)
            at paint.recursiveSearch(paint.java:190)
            at paint.recursiveSearch(paint.java:190)
            at paint.recursiveSearch(paint.java:190)
            at paint.recursiveSearch(paint.java:190)
            (continues for a few pages)

    Here is my recursive code:

        public void recursiveSearch(boolean[][] list, Point p) {
            if (isValid(p)) {
                if (list[(int) p.y][(int) p.x]) {
                    fillPoints.add(p);
                    list[(int) p.y][(int) p.x] = false;
                    recursiveSearch(list, new Point(p.x - 1, p.y)); // Checks to the left
                    recursiveSearch(list, new Point(p.x, p.y - 1)); // Checks above
                    recursiveSearch(list, new Point(p.x + 1, p.y)); // Checks to the right
                    recursiveSearch(list, new Point(p.x, p.y + 1)); // Checks below
                }
            }
        }

    Is there any way I can work around an error like this? I know that the loop will never go on forever, it just could take a lot of time. Thanks in advance.
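
    A hedged sketch of the standard fix (the same traversal with an explicit stack on the heap instead of the call stack, so a 360,000-pixel region cannot overflow it; field and helper names follow the question's code):

        import java.awt.Point;
        import java.util.ArrayDeque;
        import java.util.Deque;

        // Drop-in replacement for recursiveSearch:
        public void iterativeSearch(boolean[][] list, Point start) {
            Deque<Point> stack = new ArrayDeque<Point>();
            stack.push(start);
            while (!stack.isEmpty()) {
                Point p = stack.pop();
                if (isValid(p) && list[(int) p.y][(int) p.x]) {
                    fillPoints.add(p);
                    list[(int) p.y][(int) p.x] = false;   // mark before expanding
                    stack.push(new Point(p.x - 1, p.y));  // left
                    stack.push(new Point(p.x, p.y - 1));  // above
                    stack.push(new Point(p.x + 1, p.y));  // right
                    stack.push(new Point(p.x, p.y + 1));  // below
                }
            }
        }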

    Read the article

  • Please Describe Your Struggles with Minimizing Use of Global Variables

    - by MetaHyperBolic
    Most of the programs I write are relatively flowchartable processes, with a defined start and hoped-for end. The problems themselves can be complex but do not readily lean towards central use of objects and event-driven programming. Often, I am simply churning through great varied batches of text data to produce different text data. Only occasionally do I need to create a class: as an example, to track warnings, errors, and debugging messages, I created a class (Problems) with one instantiation (myErr), which I believe to be an example of the Singleton design pattern. As a further factor, my colleagues are more old school (procedural) than I and are unacquainted with object-oriented programming, so I am loath to create things they could not puzzle through.

    And yet I hear, again and again, how even the Singleton design pattern is really an anti-pattern and ought to be avoided because Global Variables Are Bad. Minor functions need few arguments passed to them and have no need to know of configuration (unchanging) or program state (changing) - I agree. However, the functions in the middle of the chain, which primarily control program flow, have a need for a large number of configuration variables and some program state variables. I believe passing a dozen or more arguments along to a function is a "solution," but hardly an attractive one. I could, of course, cram variables into a single hash/dict/associative array, but that seems like cheating.

    For instance, connecting to Active Directory to make a new account, I need such configuration variables as an administrative username, password, a target OU, some default groups, a domain, etc. I would have to pass those arguments down through a variety of functions which would not even use them, merely shuffle them off down a chain which would eventually lead to the function that actually needs them. I would at least declare the configuration variables to be constant, to protect them, but my language of choice these days (Python) provides no simple manner to do this, though recipes do exist as workarounds.

    Numerous Stack Overflow questions have hit on the why? of the badness and the requisite shunning, but do not often mention tips on living with this quasi-religious restriction. How have you resolved, or at least made peace with, the issue of global variables and program state? Where have you made compromises? What have your tricks been, aside from shoving around flocks of arguments to functions?
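
    A hedged middle ground in Python (the field names and the connect() call are placeholders, not a real API): bundle the unchanging configuration into one read-only object, so middle-of-the-chain functions pass a single argument along and cannot mutate it; unlike a bare dict, a namedtuple is immutable, so it feels less like cheating:

        from collections import namedtuple

        ADConfig = namedtuple('ADConfig',
                              'admin_user admin_password target_ou default_groups domain')

        CONFIG = ADConfig(
            admin_user='svc-provision',
            admin_password='...',          # in practice, loaded from a protected file
            target_ou='OU=Staff,DC=example,DC=com',
            default_groups=('staff', 'vpn'),
            domain='example.com',
        )

        def create_account(username, config):
            # only the function that needs the settings unpacks them
            connect(config.domain, config.admin_user, config.admin_password)

        # CONFIG.domain = 'other.com'  ->  AttributeError: effectively a constant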

    Read the article

  • Writing OpenGL enabled GUI

    - by Jaen
    I am exploring the possibility of writing a kind of notebook analogue that would reproduce the look and feel of using a traditional notebook, but with the added benefit of customizing the page in ways you can't on paper - ask the program to lay ruled paper here, grid paper there, paste an image, insert a recording from the built-in camera, try to do handwriting recognition on the tablet input, insert some LaTeX for neat formulas and so on. I'm pretty interested in developing it just to see if writing notes on a computer can come anywhere close to the comfort plain paper + pencil offer (hard to do IMO), and I can always turn it in as a university C++ project, so double gain there.

    Coming from that type of project, there are certain requirements for the user interface:

    The user will be able to zoom, move and rotate the notebook as he wishes, and I think it's pretty sensible to delegate that to OpenGL, so the prospective GUI needs to work well with OpenGL (preferably being rendered in it).

    The interface should be navigable with as little keyboard input as the user wishes (incorporating some sort of gestures maybe), up to limiting the keyboard keys to modifiers for the pen movements and taps; this includes tablet and possibly multitouch support.

    The interface should keep out of the way where not needed, come up where needed, and be easily layerable.

    The notebook sheet itself will be a container for objects representing the notebook blurbs, so it would be nice if the GUI were able to overlay frames on the exact parts of the OpenGL-drawn sheet to signify what can be done with a given part (like moving, rotating, deleting, copying, editing etc.) and its extents.

    In terms of interface it's probably going to end up similar to Alias' SketchBook Pro (picture: http://1.bp.blogspot.com/_GGxlzvZW-CY/SeKYA_oBdSI/AAAAAAAAErE/J6A0kyXiuqA/s400/Autodesk_Alias_SketchBook_Pro_2.jpg).

    As far as toolkits go, I'm considering Qt and nui (http://www.libnui.net/), but I'm not really aware how well they would match up to the requirements and how well they would handle such an application. As far as I know you can somehow coerce Qt into doing widget drawing with OpenGL, but on the other hand I've heard voices that its signal-slot framework isn't exactly optimal and requires its own preprocessor, and I don't know how hard it would be to do all the custom widgets I would need (say a color wheel, ruler, blurb frames, blurb selection, a tablet-targeted pop-up menu etc.) within the constraints of Qt. Also, quite a few Qt programs I've had on my machine seemed really sluggish, but that may be attributable to me having an old PC, or to programmers using Qt suboptimally, rather than to the framework itself. As for nui, I know it's also cross-platform, has all of the basic things you would require of a GUI toolkit, and - the biggest plus - is OpenGL-enabled from the start, but I don't know how it is with custom widgets and other facets, and it certainly has a smaller user base and less elaborate documentation than Qt.

    The question goes like this: does either of these toolkits fulfill (preferably all of) the requirements, or is there a well-fitting toolkit I haven't come across? Or maybe I should just roll up my sleeves, get SFML (or maybe Clutter would be more suited to this?) and something like FastDelegates or libsigc++, and program the GUI framework from the ground up myself? I would be very glad if anyone has experience with a similar GUI project and can offer some comments on how well these toolkits hold up, or whether it is worthwhile to pursue my own GUI toolkit in this case.

    Sorry for longwindedness, duh.

    Read the article

  • C# ATM Bank coding help needed please

    - by user1735692
    If anyone can help with this I would be grateful. I am trying to make a program in C# that acts like an ATM, with withdrawing and depositing money, displayed in Program.cs, which is connected to the linked class file Account.cs. At the moment it works if I manually input the data and tell it what to display, but what I want to do is allow users to enter amounts to deposit and withdraw, using overloaded implementations of the methods makeDeposit and makeWithdrawal. I have tried many things and cannot get it to work. If anyone can help, I would be grateful. Thanks again.

    Program.cs:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace Tut9
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Account myAcc = new Account();
                    myAcc.makeDeposit(10000);
                    myAcc.showBalance();
                    Console.WriteLine("Attempting to withdraw £" + 90);
                    myAcc.makeWithdrawal(90);
                    myAcc.showBalance();
                    myAcc.giveOverdraft(50);
                    myAcc.showBalance();
                    Account student = new Account(30, -100);
                    student.giveOverdraft(-500);
                }
            }
        }

    Account.cs:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace Tut9
        {
            class Account
            {
                //// Need to know the balance & overdraft
                private int balance;
                private int overdraft;

                //// Constructors
                public Account()
                {
                    balance = 0;
                    overdraft = 0;
                }

                public Account(int initial)
                {
                    balance = initial;
                }

                public Account(int intial, int over)
                {
                    balance = intial;
                    overdraft = over;
                }

                public void giveOverdraft(int amount)
                {
                    overdraft = amount;
                }

                //// Method to display the balance & overdraft
                public void showBalance()
                {
                    Console.WriteLine("The balance is now £" + balance);
                    if (overdraft != 0)
                    {
                        Console.WriteLine("You have an overdraft of £" + overdraft);
                    }
                }

                //// Method to make a withdrawal
                public void makeWithdrawal(int y)
                {
                    balance = balance - y;
                    Console.WriteLine("Withdrew £" + y);
                }

                //// Method to make a deposit
                public void makeDeposit(int x)
                {
                    balance = balance + x;
                    Console.WriteLine("Deposited £" + x);
                }
            }
        }
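
    A hedged sketch of the overloads the assignment seems to ask for (minimal validation; these would sit in Account.cs alongside the existing methods): a parameterless version prompts the user, then delegates to the existing int-taking version:

        //// Overloads: ask the user for the amount, then reuse the existing methods
        public void makeDeposit()
        {
            Console.Write("Enter amount to deposit: £");
            int amount;
            if (int.TryParse(Console.ReadLine(), out amount) && amount > 0)
                makeDeposit(amount);
            else
                Console.WriteLine("Please enter a whole number of pounds.");
        }

        public void makeWithdrawal()
        {
            Console.Write("Enter amount to withdraw: £");
            int amount;
            if (int.TryParse(Console.ReadLine(), out amount) && amount > 0)
                makeWithdrawal(amount);
            else
                Console.WriteLine("Please enter a whole number of pounds.");
        }

    In Main, myAcc.makeDeposit(); (no argument) then picks the prompting overload, while myAcc.makeDeposit(10000); still works exactly as before.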

    Read the article

  • C# setting case constant expressions, do they have to follow a specific order?

    - by Umeed
    Say I'm making a simple program, and the user is in the menu. The menu options are 1, 3, 5 and 7 (I wouldn't actually do that, but let's just go with it), and I want to make my switch statement:
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;

    namespace DecisionMaking2
    {
        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine("Please choose an option: ");
                string SelectedOpt = Console.ReadLine();
                double Selection = Convert.ToDouble(SelectedOpt);
                double MenuOption = (Selection);
                switch (MenuOption)
                {
                    case 1:
                        Console.WriteLine("Selected option #1");
                        break;
                    case 2:
                        Console.WriteLine("Selected option #3");
                        break;
                    case 3:
                        Console.WriteLine("Selected option #5");
                        break;
                    case 4:
                        Console.WriteLine("Selected option #7");
                        break;
                    default:
                        Console.WriteLine("Please choose from the options List!");
                        break;
                }
            }
        }
    }
    Would that work, or would I have to make each case constant expression the option number I am using? I went to the Microsoft website and didn't quite pick up on anything I was looking for. Also, while I have your attention: because I don't know which option the user will select, MenuOption could be anything the user inputs, so would what I have even work? I am doing this all by hand and don't get much lab time to work on it, as I have tons of other courses to work on and then a boring job to go to, and my PC at home has a restarting issue, lol. Any and all help is greatly appreciated. P.S. The computer I'm on right now posting this doesn't have any compilers or coding programs, and it's not mine, just to get that out of the way. Thanks again!
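    A note and a sketch, reflecting my reading of the question rather than the poster's code: case labels don't have to be consecutive or in any particular order, they only have to be distinct constants, so you can use the menu's own values 1, 3, 5 and 7 directly. Also, C# (before the pattern-matching added in C# 7) doesn't allow switching on a double, so parsing the input as an int is the usual approach:
    // Sketch: switch directly on the menu's own values. Case labels
    // need only be distinct constants; 1, 3, 5, 7 in any order is fine.
    using System;

    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Please choose an option: ");
            int menuOption;
            if (!int.TryParse(Console.ReadLine(), out menuOption))
            {
                Console.WriteLine("Please enter a number!");
                return;
            }

            switch (menuOption)
            {
                case 1:
                    Console.WriteLine("Selected option #1");
                    break;
                case 3:
                    Console.WriteLine("Selected option #3");
                    break;
                case 5:
                    Console.WriteLine("Selected option #5");
                    break;
                case 7:
                    Console.WriteLine("Selected option #7");
                    break;
                default:
                    Console.WriteLine("Please choose from the options list!");
                    break;
            }
        }
    }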

    Read the article

  • Top 10 Reasons SQL Developer is Perfect for Oracle Beginners

    - by thatjeffsmith
    Learning new technologies can be daunting. If you've never used a Mac before, you'll probably be a bit baffled at first. But you're probably at least coming from a desktop computing background (Windows), so you have a common frame of reference. But what if you're just now learning to use a relational database? Yes, you've played with Access a bit, but now your employer or college instructor has charged you with becoming proficient with Oracle Database. Here are 10 reasons why I think Oracle SQL Developer is the perfect vehicle to help get you started.
    1. It's free
    No need to break into one of these… No start-up costs, no need to wrangle budget dollars from your company. Students don't have any money after books and lab fees anyway, and most employees don't like having to ask for 'special' software. So avoid all of that and see whether the free stuff suits your needs first. Upgrades are available on a regular basis, also at no cost, and support is freely available via our public forums.
    2. It will run pretty much anywhere
    Windows – check. OSX (Apple) – check. Unix – check. Linux – check. No need to start up a Windows VM to run your Windows-only software on your lab machine.
    3. Anyone can install it
    There's no installer, no registry to be updated, no admin privs to be obtained. If you can download and extract files to your machine or USB storage device, you can run it. You can be up and running with SQL Developer in under 5 minutes. Here's a video tutorial to see how to get started.
    4. It's ubiquitous
    I admit it, I learned a new word yesterday and I wanted an excuse to use it. SQL Developer is everywhere. It's had over 2,500,000 downloads in the past year, and is one of the most downloaded items from OTN. This means if you need help, there's someone sitting near you who can assist, and since they're in the same tool as you, they'll be speaking the same language.
    5. Simple user interface
    Up-up-down-down-left-right-left-right-A-B-A-B-START will get you 30 lives, but you already knew that, right? You connect, you see your objects, you click on your objects. Or you can use the worksheet to write your queries and programs in. There's only one toolbar, and just a few buttons. If you're like me, video games became less fun when each button had 6 action items mapped to it. I just want the good ole 'A', 'B', 'SELECT', and 'START' controls. If you're new to Oracle, you shouldn't have the double workload of learning a complicated new tool as well.
    6. It's not a 'black box'
    Click through your objects, but also get the SQL that drives the GUI. As you use the wizards to accomplish tasks for you, you can view the SQL statement being generated on your behalf. Just because you have a GUI doesn't mean you're ceding your responsibility to learn the underlying code that makes the database work.
    7. It's four tools in one
    It's not just a query tool. Maybe you need to design a data model first? Or maybe you need to migrate your Sybase ASE database to Oracle for a new project? Or maybe you need to create some reports? SQL Developer does all of that. So once you get comfortable with one part of the tool, the others will be much easier to pick up as your needs change.
    8. Great learning resources available
    Videos, blogs, hands-on learning labs – you name it, we've got it. Why wait for someone to train you when you can train yourself at your own pace?
    9. You can use it to teach yourself SQL
    Instead of being faced with the white-screen-of-panic, you can visually build your queries by dragging and dropping tables and views into the Query Builder. Yes, 'just like Access' – only better. And as you build your query, toggle to the Worksheet panel and see the SQL statement. Again, SQL Developer is not a black box. If you prefer to learn by trial and error, the worksheet will attempt to suggest the next bit of your SQL statement with its completion insight feature. And if you have syntax errors, those will be highlighted, just like your misspelled words in your favorite word processor.
    10. It scales to match your experience level
    You won't be a n00b forever. In 6-8 months, when you're ready to tackle something a bit more complicated, like XML DB or Oracle Spatial, the tool is already there waiting on you. No need to go out and find the 'advanced' tool.
    11. Wait, you said this was a 'Top 10' list?
    Yes. Yes, I did. I'm using this 'trick' to get you to continue reading because I'm going to say something you might not want to hear. Are you ready? Tools won't replace experience, failure, hard work, and training. Just because you have the keys to the car doesn't mean you're ready to head out on the race track. While SQL Developer reduces the barriers to entry, it does not completely remove them. Many experienced folks simply do not like tools. Rather, they don't like the people that pick up tools without the know-how to properly use them. If you don't understand what 'TRUNCATE' means, don't try it out. Try picking up a book first. Of course, it's very nice to have your own sandbox to play in, so you don't upset the other children. That's why I really like our Dev Days Database VirtualBox image. It's your own database to learn and experiment with.

    Read the article
