Search Results

Search found 8190 results on 328 pages for 'switch'.


  • How do I prevent qFatal() from aborting the application?

    - by Dave
    My Qt application uses Q_ASSERT_X, which calls qFatal(), which (by default) aborts the application. That's great for the application, but I'd like to suppress that behavior when unit testing the application. (I'm using the Google Test Framework.) I have my unit tests in a separate project, statically linking to the class I'm testing. The documentation for qFatal() reads:

        Calls the message handler with the fatal message msg. If no message handler has been installed, the message is printed to stderr. Under Windows, the message is sent to the debugger. If you are using the default message handler this function will abort on Unix systems to create a core dump. On Windows, for debug builds, this function will report a _CRT_ERROR enabling you to connect a debugger to the application. ... To suppress the output at runtime, install your own message handler with qInstallMsgHandler().

    So here's my main.cpp file:

        #include <gtest/gtest.h>
        #include <QApplication>

        void testMessageOutput(QtMsgType type, const char *msg)
        {
            switch (type) {
            case QtDebugMsg:
                fprintf(stderr, "Debug: %s\n", msg);
                break;
            case QtWarningMsg:
                fprintf(stderr, "Warning: %s\n", msg);
                break;
            case QtCriticalMsg:
                fprintf(stderr, "Critical: %s\n", msg);
                break;
            case QtFatalMsg:
                fprintf(stderr, "My Fatal: %s\n", msg);
                break;
            }
        }

        int main(int argc, char **argv)
        {
            qInstallMsgHandler(testMessageOutput);
            testing::InitGoogleTest(&argc, argv);
            return RUN_ALL_TESTS();
        }

    But my application is still stopping at the assert. I can tell that my custom handler is being called, because the output when running my tests is:

        My Fatal: ASSERT failure in MyClass::doSomething: "doSomething()", file myclass.cpp, line 21
        The program has unexpectedly finished.

    What can I do so that my tests keep running even when an assert fails?
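
    A sketch of one workaround, under the assumption that the project targets Qt 4: qFatal() calls abort() itself after the installed message handler returns, so printing from the handler is not enough; the handler must never return on a fatal message. Throwing an exception that the test catches keeps the process alive. The FatalAssert type below is an illustrative name, not part of Qt or Google Test:

        #include <stdexcept>
        #include <cstdio>
        #include <QtGlobal>

        // Thrown instead of letting qFatal() fall through to abort().
        struct FatalAssert : std::runtime_error {
            explicit FatalAssert(const char *msg) : std::runtime_error(msg) {}
        };

        void testMessageOutput(QtMsgType type, const char *msg)
        {
            if (type == QtFatalMsg) {
                fprintf(stderr, "My Fatal: %s\n", msg);
                throw FatalAssert(msg);  // do not return control to qFatal()
            }
            fprintf(stderr, "%s\n", msg);
        }

    Each test can then wrap the code under test in a try/catch for FatalAssert (or use EXPECT_THROW) and record a failure instead of dying. The usual caveat applies: unwinding through Qt's call frames like this is only reasonable in test builds, not in production code.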


  • C++ socket protocol design issue (ring inclusion)

    - by Martin Lauridsen
    So I have these two classes, mpqs_client and client_protocol. The mpqs_client class handles a Boost socket connection to a server (sending and receiving messages with some specific format). Upon receiving a message, it calls a static method, parse_message(..), in the class client_protocol, and this method should analyse the message received and perform some corresponding action. Given some specific input, the parse_message method needs to send some data back to the server. As mentioned, this happens through the class mpqs_client. So I could, from mpqs_client, pass "this" to parse_message(..) in client_protocol. However, this leads to a two-way association between the two classes, which I reckon is not desirable. Also, to implement this, I would need to include each header in the other, and this gives me a terrible pain. I am thinking this is more of a design issue. What is the best solution here? Code is posted below.

    Class mpqs_client:

        #include "mpqs_client.h"

        mpqs_client::mpqs_client(boost::asio::io_service& io_service, tcp::resolver::iterator endpoint_iterator)
            : io_service_(io_service), socket_(io_service)
        {
            ...
        }

        ...

        void mpqs_client::write(const network_message& msg)
        {
            io_service_.post(boost::bind(&mpqs_client::do_write, this, msg));
        }

    Class client_protocol:

        #include "../network_message.hpp"
        #include "../protocol_consts.h"

        class client_protocol {
        public:
            static void parse_message(network_message& msg, mpqs_sieve **instance_, mpqs_client &client_) {
                ...
                switch (type) {
                case MPQS_DATA:
                    ...
                    break;
                case POLYNOMIAL_DATA:
                    ...
                    break;
                default:
                    break;
                }
            }
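
    A common way out, sketched under the assumption that the only thing parse_message needs from the client is write(): introduce a small abstract interface that client_protocol depends on, and let mpqs_client implement it. Both classes then depend on the interface and neither includes the other's header. The name message_writer is illustrative, not from the original code:

        // message_writer.hpp -- hypothetical interface header
        class network_message;

        class message_writer {
        public:
            virtual ~message_writer() {}
            virtual void write(const network_message& msg) = 0;
        };

        // mpqs_client implements the interface...
        class mpqs_client : public message_writer {
        public:
            void write(const network_message& msg); /* as already defined */
            // ...
        };

        // ...and client_protocol only ever sees the interface:
        class client_protocol {
        public:
            static void parse_message(network_message& msg, mpqs_sieve **instance_, message_writer& writer);
        };

    mpqs_client still passes itself to parse_message, but through the message_writer reference, so the dependency arrows both point at the small interface instead of at each other.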


  • C programming - How to print numbers with a decimal component using only loops?

    - by californiagrown
    I'm currently taking a basic intro to C programming class, and for our current assignment I am to write a program to convert a number of kilometers to miles using loops--no if-else, no switch statements, and no other constructs we haven't learned yet. So basically we can only use loops and some operators. The program will generate three identical tables (starting from 1 kilometer through the input value) for one number input, using the while loop for the first set of calculations, the for loop for the second, and the do loop for the third. I've written the entire program, however I'm having a bit of a problem getting it to recognize an input with a decimal component. Here is what I have for the while loop conversions:

        #include <stdio.h>
        #define KM_TO_MILE .62

        int main(void)
        {
            double km, mi, count;

            printf("This program converts kilometers to miles.\n");
            do {
                printf("\nEnter a positive non-zero number");
                printf(" of kilometers of the race: ");
                scanf("%lf", &km);
                getchar();
            } while (km <= 1);

            printf("\n KILOMETERS          MILES (while loop)\n");
            printf(" ==========          =====\n");
            count = 1;
            while (count <= km) {
                mi = KM_TO_MILE * count;
                printf("%8.3lf %14.3lf\n", count, mi);
                ++count;
            }
            getchar();
        }

    The code reads in and converts integers fine, but because the increment only increases by 1 it won't print a number with a decimal component (e.g. 3.2, 22.6, etc.). Can someone point me in the right direction on this? I'd really appreciate any help! :)
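
    One loop-only way to pick up the fractional tail (a sketch with the input hard-coded; whether a second loop standing in for an if is within the assignment's rules is a judgment call): after the whole-kilometer rows, add a loop whose condition is true at most once, exactly when km has a leftover fraction.

        #include <stdio.h>
        #define KM_TO_MILE .62

        int main(void)
        {
            double km = 3.2, mi, count = 1;   /* km hard-coded for the sketch */

            while (count <= km) {             /* whole-kilometer rows: 1, 2, 3 */
                mi = KM_TO_MILE * count;
                printf("%8.3f %14.3f\n", count, mi);
                ++count;
            }
            while (count - 1 < km) {          /* runs exactly once, only if km has a fractional tail */
                mi = KM_TO_MILE * km;
                printf("%8.3f %14.3f\n", km, mi);
                count = km + 1;               /* forces the condition false on the next test */
            }
            return 0;
        }

    For km = 3.2 this prints rows for 1, 2, 3, and then 3.2; for a whole-number input the second loop's condition is false immediately and nothing extra is printed.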


  • what's the performance difference between int and varchar for primary keys

    - by user568576
    I need to create a primary key scheme for a system that will need peer-to-peer replication. So I'm planning to combine a unique system ID and a sequential number in some way to come up with unique IDs. I want to make sure I'll never run out of IDs, so I'm thinking about using a varchar field, since I could always add another character if I start running out. But I've read that integers are better optimized for this. So I have some questions...

    1) Are integers really better optimized? And if they are, how much of a performance difference is there between varchars and integers? I'm going to use Firebird for now. But I may switch later. Or possibly support multiple DBs. So I'm looking for generalizations, if that's possible.

    2) If integers are significantly better optimized, why is that? And is it likely that varchars will catch up in the future, so eventually it won't matter anyway?

    My varchar keys won't have any meaning, except for the unique system ID part. But I may want to obscure that somehow. Also, I plan to efficiently use all the bits of each character. I don't, for example, plan to code the integer 123 as the character string "123". So I don't think varchars will require more space than integers.
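
    For the "system ID plus sequence" scheme, a fixed-width integer key can be composed with plain bit packing, which keeps the column a BIGINT as far as the database is concerned. A sketch (the 16/48 split is an arbitrary assumption; size the fields to your replication topology):

        #include <cstdint>
        #include <cassert>

        // Pack a 16-bit originating-system ID and a 48-bit local sequence into
        // one 64-bit key: ~65k systems, ~281 trillion rows per system.
        uint64_t make_key(uint16_t system_id, uint64_t sequence)
        {
            assert(sequence < (uint64_t(1) << 48));
            return (uint64_t(system_id) << 48) | sequence;
        }

        uint16_t key_system(uint64_t key)   { return uint16_t(key >> 48); }
        uint64_t key_sequence(uint64_t key) { return key & ((uint64_t(1) << 48) - 1); }

    The usual reason integers win is fixed width and cheap comparison: an index on a BIGINT compares one machine word, while a varchar comparison walks bytes under collation rules and carries per-row length bookkeeping.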


  • ListView setOnItemClickListener in ListFragment not working

    - by Siddarth Kaki
    I am developing an app that uses ActionBar tabs to display a list of options through ListFragment. The list (and ListFragment) display without a problem, but the ListView's setOnItemClickListener doesn't seem to work, as nothing happens when an item in the list is clicked. Here's the code for the ListFragment class:

        package XXX.XXX;

        public class AboutFrag extends SherlockListFragment {

            @Override
            public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
                View view = inflater.inflate(R.layout.aboutfrag, container, false);
                ListView lv = (ListView) view.findViewById(android.R.id.list);
                String[] items = new String[] { "About 1", "About 2", "About 3" };
                lv.setAdapter(new ArrayAdapter<String>(getActivity(), R.layout.list_item, items));
                lv.setOnItemClickListener(new OnItemClickListener() {
                    public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
                        switch (position) {
                        case 0:
                            Intent browserIntent = new Intent(Intent.ACTION_VIEW, Uri.parse("http://google.com"));
                            startActivityForResult(browserIntent, 0);
                            break;
                        case 1:
                            Intent browserIntent2 = new Intent(Intent.ACTION_VIEW, Uri.parse("http://wikipedia.org"));
                            startActivityForResult(browserIntent2, 0);
                            break;
                        case 2:
                            Intent browserIntent3 = new Intent(Intent.ACTION_VIEW, Uri.parse("http://android.com"));
                            startActivityForResult(browserIntent3, 0);
                            break;
                        }
                    }
                });
                return view;
            }
        }

    I'm assuming it does not work because the class returns the view object, so the FragmentActivity can't run the listener code, so does anyone know how to make this work? By the way, I am using ActionBarSherlock. Thanks in advance!!!


  • iPhone development - app design patterns

    - by occulus
    There are tons of resources concerning coding on the iPhone. Most of them concern "how do I do X", e.g. "set up a navigation controller", or "download text from a URL". All good and fine. What I'm more interested in now are the questions that follow the simpler stuff - how to best structure your complex UI, or your app, or the common problems that arise.

    To illustrate: a book like "Beginning iPhone 3 Development" tells you how to set up a multi-view-controller app with a top 'switcher' view controller that switches between views owned by other view controllers. Fine, but you're only told how to do that, and nothing about the problems that can follow: for example, if I use their paradigm to switch to a UINavigationController, the navigation bar ends up too low on the screen, because UINavigationController expects to be the topmost UIViewController (apparently). Also, delegate methods (e.g. relating to orientation changes) go to the top switcher view controller, not the actual controller responsible for the current view. I have fixes for these things but they feel like hacks, which makes me unhappy and makes me feel like I'm missing something.

    One productive thing might be to look at some open source iPhone projects (see this question). But aside from that?


  • Getting empty update rectangle in OnPaint after calling InvalidateRect on a layered window

    - by Shawn
    I'm trying to figure out why I've been getting an empty update rectangle when I call InvalidateRect on a transparent window. The idea is that I've drawn something on the window (it gets temporarily switched to have an alpha of 1/255 for the drawing), and then I switch it to full transparent mode (i.e. alpha of 0) in order to interact with the desktop and to be able to move the drawing around the screen on top of the desktop. When I try to move the drawing, I get its bounding rectangle and use it to call InvalidateRect, as such:

        InvalidateRect(m_hTarget, &winRect, FALSE);

    I've confirmed that the winRect is indeed correct, and that m_hTarget is the correct window and that its rectangle fully encompasses winRect. I get into the OnPaint handler in the class corresponding to m_hTarget, which is derived from a CWnd. In there, I create a CPaintDC, but when I try to access the update rectangle (dcPaint.m_ps.rcPaint) it's always empty. This rectangle gets passed to a function that determines if we need to update the screen (by using UpdateLayeredWindow in the case of a transparent window). If I hard-code a non-empty rectangle in here, the remaining code works correctly and I am able to move the drawing around the screen.

    I tried changing the 'FALSE' parameter to 'TRUE' in InvalidateRect, with no effect. I also tried using a standard CDC, and then using the BeginPaint/EndPaint method in my OnPaint handler, just to ensure that CPaintDC wasn't doing something odd ... but I got the same results.

    The code that I'm using was originally designed for opaque windows. If m_hTarget corresponds to an opaque window, the same set of function calls results in the correct (i.e. non-empty) rectangle being passed to OnPaint. Once the window is layered, though, it doesn't seem to work right.
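
    One plausible explanation, offered as an assumption rather than documented behavior: once a window's content is supplied through UpdateLayeredWindow, the window manager no longer repaints the window itself and so has little reason to accumulate an update region for it, which would leave rcPaint empty. A workaround that sidesteps the question is to track the dirty rectangle yourself instead of reading it back from the PAINTSTRUCT; a sketch with illustrative member names:

        // Remember what we invalidated, and use that in OnPaint instead of
        // relying on dc.m_ps.rcPaint (which stays empty in the layered case).
        class CTargetWnd : public CWnd
        {
            CRect m_dirtyRect;                        // pending area, ours to maintain

        public:
            CTargetWnd() { m_dirtyRect.SetRectEmpty(); }

            void InvalidateTarget(const CRect& rc)
            {
                CRect merged;
                merged.UnionRect(&m_dirtyRect, &rc);  // accumulate the pending area
                m_dirtyRect = merged;
                InvalidateRect(&rc, FALSE);           // still schedule the paint
            }

            afx_msg void OnPaint()                    // + ON_WM_PAINT() in the message map
            {
                CPaintDC dc(this);
                CRect update = m_dirtyRect;           // our rect, not dc.m_ps.rcPaint
                m_dirtyRect.SetRectEmpty();
                UpdateFromRect(update);               // hypothetical: the existing redraw path
            }
        };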


  • Using CGContextDrawTiledImage at different zooms causes massive memory growth

    - by Jacques
    I'm working on an app where there's a view in a zoomable UIScrollView. When the user zooms in or out, I redraw the view that's in the UIScrollView to be nice and sharp. That view has a background image that I draw with CGContextDrawTiledImage. I noticed that memory usage grows every time I switch to a new zoom level. It looks like CGContextDrawTiledImage keeps a cache somewhere of the image scaled to different sizes. So, if I go from 1.0 to 1.1x zoom, memory use grows. Going back to 1.0 doesn't cause it to grow, but then going to 1.05 and then 1.2 causes it to grow twice. Back to 1.1 and no growth. Of course, the zoom level is under user control, so I don't have control over how many zoom levels happen.

    Right now my background image is kind of massive (512x512), so this causes memory usage to grow very quickly. It doesn't show up as a memory leak in Instruments, just additional allocations that never get freed. I've tried to find a way to free the cache that appears to be being created, but no luck. It doesn't seem to respond to low memory warnings, for example. I also tried setting the view's backgroundColor to a UIColor created with colorWithPatternImage, but that doesn't work because I'm doing the scaling by changing the graphics context's CTM, not by setting the view's transform. Any ideas on how to keep memory usage from blowing up?


  • Android game logic problem

    - by semajhan
    I'm currently creating a game and have a problem which I think I know why it is occurring, but I'm not entirely sure, and even if I knew, I don't know how to solve it.

    I have a 2D array, 10 x 10, and a "player" class that takes up a tile. I have created 2 instances of the player and move them around via swiping. Around the edges I have put "walls" that the player cannot walk through, and everything works fine, until I remove a wall. Once I remove a wall and move the character/player to the edge of the screen, the player cannot go any further. The problem occurs here, where the second instance of the player is not at the edge of the screen but, say, 2 tiles from the first instance of "player" who is at the edge. If I try moving them further in the direction of the edge, I understand that the first instance of player wouldn't move or do anything, but the second instance of player should still move, yet it won't.

    This is the code that executes when the user swipes:

        if (player.getArrayX() - 1 != player2.getArrayX()) {
            player.moveLeft();
        } else if (player.getArrayX() - 1 == player2.getArrayX() && player.getArrayY() != player2.getArrayY()) {
            player.moveLeft();
        }

        if (player2.getArrayX() - 1 != player.getArrayX()) {
            player2.moveLeft();
        } else if (player2.getArrayX() - 1 == player.getArrayX() && player2.getArrayY() != player.getArrayY()) {
            player2.moveLeft();
        }

    In the player class I have:

        public void moveLeft() {
            if (alive) {
                switch (levelMaster.getLevel1(getArrayX() - 1, getArrayY())) {
                case 0:
                    break;
                case 1:
                    subX(); // basically moves player left
                    setArrayX(getArrayX() - 1); // shifts x coord of player within the tilemap
                    Log.d("semajhan", "x: " + getArrayX());
                    break;
                case 9:
                    subX();
                    setArrayX(getArrayX() - 1);
                    setAlive(false);
                    break;
                }
            }
        }

    Any help on the matter or further insight would be greatly appreciated, thanks.


  • Problem comparing pointers and integers in C

    - by Dimitri
    Hi, I have a problem with this code. When I am using this function I have no warnings:

        void handler(int sig)
        {
            switch (sig) {
            case SIGINT: {
                click++;
                fprintf(stdout, "SIGINT recu\n");
                if (click == N) {
                    exit(0);
                }
            }
            case SIGALRM:
                fprintf(stdout, "SIGALRM received\n");
                exit(0);
            case SIGTERM:
                fprintf(stdout, "SIGTERM received\n");
                exit(0);
            }
        }

    But when I rewrite the function with this new version, I get a "comparison between pointer and integer" warning on the if statement:

        void handler(int sig)
        {
            printf("Signal recu\n");
            if (signal == SIGINT) {
                click++;
                fprintf(stdout, "SIGINT received; Click = %d\n", click);
                if (click == N) {
                    fprintf(stdout, "Exiting with SIGINT\n");
                    exit(0);
                }
            } else if (signal == SIGALRM) {
                fprintf(stdout, "SIGALRM received\n");
                exit(0);
            } else if (signal == SIGTERM) {
                fprintf(stdout, "SIGTERM received\n");
                exit(0);
            }
        }

    Can someone tell me where the problem is?
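
    The warning points at the identifier, not the comparison logic: in the second version the condition tests `signal`, which inside the function still names the standard library function from <signal.h> (a function pointer), while the int parameter is called `sig`. Comparing a function pointer with SIGINT is exactly the pointer/integer comparison the compiler reports. A corrected sketch (click and N stand in for the original's globals, defined here only for illustration):

        #include <signal.h>
        #include <stdio.h>
        #include <stdlib.h>

        static int click = 0;
        #define N 3              /* illustrative; the original defines these elsewhere */

        void handler(int sig)    /* compare the parameter, not signal() */
        {
            printf("Signal recu\n");
            if (sig == SIGINT) {
                click++;
                fprintf(stdout, "SIGINT received; Click = %d\n", click);
                if (click == N) {
                    fprintf(stdout, "Exiting with SIGINT\n");
                    exit(0);
                }
            } else if (sig == SIGALRM) {
                fprintf(stdout, "SIGALRM received\n");
                exit(0);
            } else if (sig == SIGTERM) {
                fprintf(stdout, "SIGTERM received\n");
                exit(0);
            }
        }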


  • Is there a way to load an existing connection string for Linq to SQL from an app.config file?

    - by Brian Surowiec
    I'm running into a really annoying problem with my Linq to SQL project. When I add everything in under the web project, everything goes as expected and I can tell it to use my existing connection string stored in the web.config file, and the Linq code pulls directly from the ConfigurationManager. This all turns ugly once I move the code into its own project. I've created an app.config file and put the connection string in there as it was in the web.config, but when I try to add another table, the IDE keeps forcing me to either hardcode the connection string or it creates a Settings file and puts it in there, which then adds a new entry into the app.config file with a new name.

    Is there a way to keep my Linq code in its own project yet still refer back to my config file without the IDE continuously hardcoding the connection string or creating the Settings file? I'm converting part of my DAL over to use Linq to SQL, so I'd like to use the existing connection string that our old code is using as well as keep the value in a common location, and one spot, instead of in a number of spots. Manually changing the mode to WebSettings instead of AppSettings works until I try to add a new table; then it goes back to hardcoding the value or recreating the Settings file.

    I also tried switching the project type to a web project and then renaming my app.config to web.config, and then everything works as I'd like it to. I'm just not sure if there are any downfalls to keeping this as a web project since it really isn't one. The project only contains the Linq to SQL code and an implementation of my repository classes. My project layout looks like this:

        Website
            - connectionString.config
            - web.config (refers to connectionString.config)
        Middle Tier
            - Business Logic
            - Repository Interfaces
            - etc.
        DAL
            - Linq to SQL code
            - Existing SPROC code
            - connectionString.config (linked from the web project)
            - app.config (refers to connectionString.config)


  • Can't call function from within onOptionsItemSelected

    - by Kristy Welsh
        public boolean onOptionsItemSelected(MenuItem item) {
            // check selected menu item
            switch (item.getItemId()) {
            case R.id.exit:
                this.finish();
                return true;
            case R.id.basic:
                Difficulty = DIFFICULTY_BASIC;
                Toast.makeText(YogaPosesActivity.this, "Difficulty is Basic", Toast.LENGTH_SHORT).show();
                SetImageView(myDbHelper);
                return true;
            case R.id.advanced:
                Toast.makeText(YogaPosesActivity.this, "Difficulty is Advanced", Toast.LENGTH_SHORT).show();
                Difficulty = DIFFICULTY_ADVANCED;
                SetImageView(myDbHelper);
                return true;
            case R.id.allPoses:
                Toast.makeText(YogaPosesActivity.this, "All Poses Will Be Displayed", Toast.LENGTH_SHORT).show();
                Difficulty = DIFFICULTY_ADVANCED_AND_BASIC;
                SetImageView(myDbHelper);
                return true;
            default:
                return super.onOptionsItemSelected(item);
            }
        }

    I get an error when I call the SetImageView function, which was defined outside of the onCreate method. Can you not call a function unless it was defined inside onCreate? I get a NullPointerException when calling the function.


  • How do I fix the alpha value after calling GDI text functions?

    - by Daniel Stutzbach
    I have an application that uses the Aero glass effect, so each pixel has an alpha value in addition to red, green, and blue values. I have one custom-draw control that has a solid white background (alpha = 255). I would like to draw solid text on the control using the GDI text functions. However, these functions set the alpha value to an arbitrary value, causing the text to translucently show whatever window is beneath my application's.

    After rendering the text, I would like to go through all of the pixels in the control and set their alpha value back to 255. What's the best way to do that? I haven't had any luck with the BitBlt, GetPixel, and SetPixel functions. They appear to be oblivious to the alpha value.

    Here are other solutions that I have considered and rejected:

    Draw to a bitmap, then copy the bitmap to the device: With this approach, the text rendering does not make use of the characteristics of the monitor (e.g., ClearType).

    Use GDI+ for text rendering: This application originally used GDI+ for text rendering (before I started working on Aero support). I switched to GDI because of difficulties I encountered trying to accurately measure strings with GDI+. I'd rather not switch back.

    Set the Aero region to avoid the control in question: My application's window is actually a child window of a different application running in a different process. I don't have direct control over the Aero settings on the top-level window.

    The application is written in C# using Windows Forms, though I'm not above using Interop to call Win32 API functions.
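
    Since interop is on the table, here is a sketch of one approach (Win32 C++, P/Invoke-able from Windows Forms): pull the control's pixels into a 32bpp DIB section with BitBlt, force every alpha byte opaque, and blit them back. The assumption being made is that BitBlt to and from a top-down 32-bit DIB carries the alpha bytes unchanged, whereas GetPixel/SetPixel really do drop alpha:

        #include <windows.h>

        // Force alpha = 255 over rc on hdc by round-tripping through a DIB section.
        void FixAlpha(HDC hdc, const RECT& rc)
        {
            const int w = rc.right - rc.left, h = rc.bottom - rc.top;

            BITMAPINFO bmi = {};
            bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
            bmi.bmiHeader.biWidth       = w;
            bmi.bmiHeader.biHeight      = -h;              // negative = top-down rows
            bmi.bmiHeader.biPlanes      = 1;
            bmi.bmiHeader.biBitCount    = 32;
            bmi.bmiHeader.biCompression = BI_RGB;

            void* bits = NULL;
            HDC memDC   = CreateCompatibleDC(hdc);
            HBITMAP dib = CreateDIBSection(hdc, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);
            HGDIOBJ old = SelectObject(memDC, dib);

            BitBlt(memDC, 0, 0, w, h, hdc, rc.left, rc.top, SRCCOPY);   // pull pixels out

            unsigned char* p = static_cast<unsigned char*>(bits);
            for (int i = 0; i < w * h; ++i)
                p[i * 4 + 3] = 0xFF;                       // BGRA layout: byte 3 is alpha

            BitBlt(hdc, rc.left, rc.top, w, h, memDC, 0, 0, SRCCOPY);   // push them back

            SelectObject(memDC, old);
            DeleteObject(dib);
            DeleteDC(memDC);
        }

    Called right after the GDI text calls, this keeps the on-screen ClearType rendering (the same pixels come back) while repairing the alpha channel those calls clobbered.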


  • PHP: imagepng is creating inordinately large files

    - by Rafael
    I'm using a simple thumbnailing script I wrote and it's pretty standard:

        $imgbuffer = imagecreatetruecolor($thumbwidth, $thumbheight);
        switch ($type) {
            case 1:
                $image = imagecreatefromgif($img);
                break;
            case 2:
                $image = imagecreatefromjpeg($img);
                break;
            case 3:
                $image = imagecreatefrompng($img);
                break;
            case 6:
                $image = imagecreatefrombmp($img);
                break;
            case 15:
                $image = imagecreatefromwbmp($img);
                break;
            default:
                return log_error("Tried to create thumbnail from $img: not a valid image");
        }
        imagecopyresampled($imgbuffer, $image, 0, 0, 0, 0, $thumbwidth, $thumbheight, $width, $height);
        $output = imagepng($imgbuffer, "$album/thumbs/$imgname.png", 9);

    9 is the maximum compression setting, yet from a 400 x 600 JPEG image (at 56 kB) I'm getting a thumbnail 27 kB in size (140 x 140). Using imagejpeg (quality of 80) instead of imagepng it's about 4 kB. How can this be, even at the maximum compression setting for imagepng? I tried using imagecopy instead of imagecopyresampled, and imagecreate instead of the true color version. Unfortunately the images come out mangled somehow. Is there any way to get PNG thumbnails of a reasonably small file size (about 4 kB at 140 x 140)? Or do I have to use JPEG?


  • what is the relation between SIGTSTP and SIGCHLD

    - by Rawhi
    I have two handlers, one for each of SIGTSTP and SIGCHLD. The thing is that when I pause a process using SIGTSTP, the handler function for SIGCHLD runs too. What should I do to prevent this?

        void ExeExternal(char *args[MAX_ARG], char* cmdString, LIST_ELEMENT** pList, int *Susp_Bg_Pid, int *susp)
        {
            int pID, status, w;
            switch (pID = fork()) {
            case -1:
                perror("smash error: >");
                break;
            case 0:             // Child Process
                setpgrp();
                execv(args[0], args);
                execvp(args[0], args);
                perror("error");
                exit(EXIT_FAILURE);
                break;
            default:
                if (cmdString[strlen(cmdString) - 1] != '&') {
                    *Susp_Bg_Pid = pID;
                    *susp = 1;
                    while (*susp);
                } else {
                    InsertElem(pList, args[0], getpid(), pID, 0);
                }
                break;
            }
        }

    Signal handlers:

        void signalHandler(int signal)
        {
            int pid, cstatus;
            if (signal == SIGCHLD) {
                susp = 0;
                pid = waitpid(-1, &cstatus, WNOHANG);
                printf("[[child %d terminated]]\n", pid);
                DelPID(&JobsList, pid);
            }
        }

        void ctrlZsignal(int signal)
        {
            kill(Susp_Bg_Pid, SIGTSTP);
            susp = 0;
            printf("\nchild %d suspended\n", Susp_Bg_Pid);
        }

    Susp_Bg_Pid is used to save the paused process id. susp indicates whether the parent process ("smash") is suspended or not.
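
    The relation is direct: POSIX delivers SIGCHLD to the parent not only when a child terminates but also when a child is stopped, e.g. by SIGTSTP, so the SIGCHLD handler firing here is expected behavior. Two standard remedies: install the SIGCHLD handler via sigaction() with the SA_NOCLDSTOP flag so stops don't raise SIGCHLD at all, or keep the notifications and distinguish the cases with the wait-status macros. A sketch of the latter (stripped of the post's job-list bookkeeping):

        #include <signal.h>
        #include <stdio.h>
        #include <sys/wait.h>

        void signalHandler(int sig)
        {
            int pid, cstatus;
            if (sig == SIGCHLD) {
                /* WUNTRACED makes waitpid report stopped children too */
                pid = waitpid(-1, &cstatus, WNOHANG | WUNTRACED);
                if (pid <= 0)
                    return;                                   /* nothing to reap */
                if (WIFSTOPPED(cstatus))
                    printf("[[child %d stopped]]\n", pid);    /* SIGTSTP case: keep the job */
                else
                    printf("[[child %d terminated]]\n", pid); /* only now remove it from the jobs list */
            }
        }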


  • Void* array casting to float, int32, int16, etc.

    - by Griffin
    Hey guys, I've got an array of PCM data. It could be 16 bit, 24 bit packed, 32 bit, etc., it could be signed or unsigned, and it could be 32 or 64 bit floating point. It is currently stored as a "void**" matrix, indexed by channel, then by frame. The goal is to allow my library to take in any PCM format and buffer it, without requiring manipulation of the data to fit a designated structure. If the A/D converter spits out 24 bit packed arrays of interleaved PCM, I need to accept it gracefully. I also need to support 16 bit non-interleaved, as well as any permutation of the above formats. I know the bit depth and other information at runtime, and I'm trying to code efficiently while not duplicating code.

    What I need is an effective way to cast the matrix, put PCM data into the matrix, and then pull it out later. I can cast the matrix to int32_t or int16_t for the 32 and 16 bit signed PCM respectively, and I'll probably have to store the 24 bit PCM in an int32_t for 32 bit, 8 bit byte systems as well. Can anyone recommend a good way to put data into this array, and pull it out later? I'd like to avoid large sections of code which look like:

        switch( mFormat )
        {
        case 1: // unsigned 8 bit
            for( int i = 0; i < mChannels; i++ )
                framesArray = (uint8_t*)pcm[i];
            break;
        case 2: // signed 8 bit
            for( int i = 0; i < mChannels; i++ )
                framesArray = (int8_t*)pcm[i];
            break;
        case 3: // unsigned 16 bit
        ...

    Limitations: I'm working in C/C++; no templates, no RTTI, no STL. Think embedded. Things get trickier when I have to port this to a DSP with 16 bit bytes. Does anybody have any useful macros they might be willing to share? Thanks, -Griff
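
    Within those limitations (no templates), one pattern that keeps the format switch in exactly one place is a table of reader functions chosen at configuration time; every hot path then calls through a function pointer with no per-sample branching. A sketch with illustrative format ids (map them to the real mFormat codes; float and 16-bit-byte DSP variants would get analogous readers):

        #include <stdint.h>

        enum { FMT_U8, FMT_S16, FMT_S24_PACKED, FMT_S32 };   /* illustrative ids */

        typedef int32_t (*sample_reader)(const void *chan, uint32_t frame);

        static int32_t read_u8 (const void *s, uint32_t i) { return (int32_t)((const uint8_t *)s)[i] - 128; }
        static int32_t read_s16(const void *s, uint32_t i) { return ((const int16_t *)s)[i]; }
        static int32_t read_s32(const void *s, uint32_t i) { return ((const int32_t *)s)[i]; }

        /* 24-bit packed little-endian: assemble three bytes, then sign-extend. */
        static int32_t read_s24(const void *s, uint32_t i)
        {
            const uint8_t *p = (const uint8_t *)s + 3 * i;
            int32_t v = p[0] | ((int32_t)p[1] << 8) | ((int32_t)p[2] << 16);
            if (v & 0x00800000)        /* sign bit of the 24-bit sample */
                v |= ~0x00FFFFFF;      /* extend into the top byte */
            return v;
        }

        /* The only switch on the format in the whole library. */
        sample_reader reader_for(int format)
        {
            switch (format) {
            case FMT_U8:         return read_u8;
            case FMT_S16:        return read_s16;
            case FMT_S24_PACKED: return read_s24;
            default:             return read_s32;
            }
        }

        /* usage, once per buffer rather than once per sample:
           sample_reader rd = reader_for(mFormat);
           int32_t s = rd(pcm[channel], frame);              */

    Matching writer functions invert each conversion for the put-data path, and the reader/writer pair can live in a small per-format struct if they should travel together.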


  • jQuery Mobile app focus-based navigation stops working after switching between pages

    - by nawar
    As much as I would like to expand on the details here, I am not able to find relevant information about the root cause of this problem. I am having this issue with my BlackBerry web app, which I built using JQM. After a few page-to-page navigations, the application becomes unresponsive on the destination page and I am not able to scroll up/down using the touchpad. If someone has had this problem or some clue to the resolution, that would be helpful.

    Edit: after doing some research I was able to narrow down the cause of the issue. I am having an issue with focus-based navigation, as I lose focus on the page elements (buttons, input fields, etc.) after a few transitions among the pages.

    Edit: I had to switch back to cursor-based navigation, as it is much faster and does not have the issue faced by focus-based navigation. I removed the entry:

        <rim:navigation mode="focus"/>

    from the config.xml file. I found this entry on the BlackBerry forums, but it hasn't solved my problem despite the fact I upgraded my WebWorks SDK to 2.0 from 1.5:

        http://supportforums.blackberry.com/t5/Web-and-WebWorks-Development/Focus-based-navigation-hangs-device/td-p/455600

    Thanks


  • Xcode enum woes

    - by Raconteur
    Hi gang, I thought I had this sorted, but I am still missing something. Very simply, I have a Settings class that holds a DAO (sitting on a plist). I want to have a couple of enums for the settings for convenience and readability, such as GamePlayType and DifficultyLevel. Right now I am defining them in the Settings.h file above the @interface line as such:

        typedef enum {
            EASY,
            NORMAL,
            HARD
        } DifficultyLevel;

    and

        typedef enum {
            SET_NUMBER_OF_MOVES,
            TO_COMPLETION
        } GamePlayType;

    If I access them from within the Settings class like:

        - (int)gridSizeForLOD {
            switch ([self difficultyLevel]) {
                case EASY:   return GRID_SIZE_EASY;
                case NORMAL: return GRID_SIZE_NORMAL;
                case HARD:   return GRID_SIZE_HARD;
                default:     return GRID_SIZE_NORMAL;
            }
        }

    everything is fine. But, if I try to access them outside of the Settings class, let's say in my main view controller class, like this:

        if (([settings gameType] == SET_NUMBER_OF_MOVES) && (numMoves == [settings numMovesForLOD])) {
            [self showLoseScreen];
        }

    I get errors (like EXC_BAD_ACCESS) or the condition always fails. Am I doing something incorrectly?

    Also, I should point out that I have this code for the call to gameType (which lives in the Settings class):

        - (GamePlayType)gameType {
            return [dao gameType];
        }

    and the DAO implements gameType like this:

        - (int)gameType {
            return (settingsContent != nil) ? [[settingsContent objectForKey:@"Game Type"] intValue] : 0;
        }

    I know I have the DAO returning an int instead of a GamePlayType, but A) the problem I am describing arose there when I tried to use the "proper" data type, and B) I did not think it would matter since the enum is just a bunch of named ints, right?

    Any help, greatly appreciated. I really want to understand this thoroughly, and something is eluding me...

    Cheers, Chris


  • Enterprise integration of disparate systems

    - by Chris Latta
    We're about to embark on a fairly large integration effort to kill off a bunch of Access and SQL Server databases and get everything into one coherent enterprise system. There are also a number of other systems (accounting, CRM, payroll, MS Exchange) that hold critical data that we need to integrate (use for data validation in other systems), report on, and otherwise expose. It is likely that some of these systems will change in the next few years, so we need to isolate our systems to be ready for change. Ideally we would be able to expose our forms in a consistent manner across as many of our systems as possible without having to re-develop them for each system.

    We are currently targeting SharePoint (2007 and soon 2010), Office (2007 and soon 2010 - Word, Excel, PowerPoint and Outlook), Reporting Services, .Net console applications, .Net Windows applications, shell extensions, and possibly exposing some functionality on mobile devices (BlackBerries currently, maybe iPhones later) and via our website. We're moving development to Visual Studio 2010 (from 2005) ahead of migrating to SharePoint 2010 and Office 2010. Given that most of our development is presently targeted to the .Net framework (mostly in C#), it seems logical to stick with this unless there is some compelling reason to switch frameworks/platforms for some aspects.

    We're thinking of your standard Database - Data Integration layer - Business Objects layer - Web Services (or REST) layer - Client Application stack, plus doing our own client application with WPF (or something else?) forms that can also be exposed in the MS systems (SharePoint, Office, Windows). So, we don't want much, just everything :) Basically we need to isolate ourselves from database and system changes, create an API that can be used throughout our systems, and then make this functionality available in our client applications.

    I'm very keen to get pointers from anyone who has tips on how to pull this off. Should we look at the Enterprise Library as a place to start? Is REST with ASP.Net MVC2 a better solution than Web Services for a system like this? Will WPF deliver forms re-use or is there something better?


  • Need help extrapolating Java code

    - by Berlioz
    If anyone is familiar with Rebecca Wirfs-Brock, she has a piece of Java code in her book titled Object Design: Roles, Responsibilities, and Collaborations. Here is the quote:

        Applying Double Dispatch to a Specific Problem

        To implement the game Rock, Paper, Scissors we need to write code that determines whether one object "beats" another. The game has nine possible outcomes based on the three kinds of objects. The number of interactions is the cross product of the kinds of objects. Case or switch statements are often governed by the type of data that is being operated on. The object-oriented language equivalent is to base its actions on the class of some other object. In Java, it looks like this:

    Here is the piece of Java code on page 16 (with the transcription slips smoothed over so it compiles: getClass() needs parentheses, and result needs a declaration):

        import java.util.*;
        import java.lang.*;

        public class Rock {
            public static void main(String args[]) {
            }

            public static boolean beats(GameObject object) {
                boolean result = false;
                if (object.getClass().getName().equals("Rock")) {
                    result = false;
                } else if (object.getClass().getName().equals("Paper")) {
                    result = false;
                } else if (object.getClass().getName().equals("Scissors")) {
                    result = true;
                }
                return result;
            }
        }

    The book continues:

        This is not a very good solution. First, the receiver needs to know too much about the argument. Second, there is one of these nested conditional statements in each of the three classes. If new kinds of objects could be added to the game, each of the three classes would have to be modified.

    Can anyone share with me how to get this "less than optimal" piece of code to work, in order to see it 'working'? She proceeds to demonstrate a better way, but I will spare you. Thanks


  • How do I add an extra separator to the top of a UITableView?

    - by richt
    Hi, I have a view for the iPhone that is basically split in two, with an informational display in the top half, and a UITableView for selecting actions in the bottom half. The problem is that there is no border or separator above the first cell in the UITableView, so the first item in the list looks funny. How can I add an extra separator at the top of the table, to separate it from the display area above it?

    Here's the code to build the cells - it's pretty straightforward. The overall layout is handled in a xib.

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"Cell";

            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease];
                cell.accessoryType = UITableViewCellAccessoryDisclosureIndicator;
            }

            switch (indexPath.row) {
                case 0: {
                    cell.textLabel.text = @"Action 1";
                    break;
                }
                case 1: {
                    cell.textLabel.text = @"Action 2";
                    break;
                }
                // etc.......
            }
            return cell;
        }


  • Microsoft JScript runtime error Object doesn't support this property or method

    - by Darxval
    So I am trying to call this function in my JavaScript but it gives me the error "Microsoft JScript runtime error: Object doesn't support this property or method" and I can't figure out why. It occurs when trying to call hmacObj.getHMAC. This is from the jsSHA website (http://jssha.sourceforge.net/), used for the HMAC-SHA1 algorithm. Thank you!

        hmacObj = new jsSHA(signature_base_string, "HEX");
        signature = hmacObj.getHMAC("hgkghk", "HEX", "SHA-1", "HEX");

    Above this I have copied the code from sha.js. Snippet:

        function jsSHA(srcString, inputFormat)
        {
            /*
             * Configurable variables. Defaults typically work
             */
            jsSHA.charSize = 8;   // Number of Bits Per character (8 for ASCII, 16 for Unicode)
            jsSHA.b64pad = "";    // base-64 pad character. "=" for strict RFC compliance
            jsSHA.hexCase = 0;    // hex output format. 0 - lowercase; 1 - uppercase

            var sha1 = null;
            var sha224 = null;

    The function it is calling (inside of the jsSHA function), snippet:

        this.getHMAC = function (key, inputFormat, variant, outputFormat)
        {
            var formatFunc = null;
            var keyToUse = null;
            var blockByteSize = null;
            var blockBitSize = null;
            var keyWithIPad = [];
            var keyWithOPad = [];
            var lastArrayIndex = null;
            var retVal = null;
            var keyBinLen = null;
            var hashBitSize = null;

            // Validate the output format selection
            switch (outputFormat)
            {
            case "HEX":
                formatFunc = binb2hex;
                break;
            case "B64":
                formatFunc = binb2b64;
                break;
            default:
                return "FORMAT NOT RECOGNIZED";
            }


  • C++ NetUserAdd() not working?

    - by Brett Powell
    I posted earlier about how to do this, got some great replies, and have managed to get the code written based off the MSDN example. However, it does not seem to be working properly. It's printing out the ERROR_ACCESS_DENIED message, but I'm not sure why, as I am running it as a full admin. I was initially trying to create a USER_PRIV_ADMIN, but MSDN said it can only use USER_PRIV_USER; sadly neither works. I'm hoping someone can spot a mistake or has an idea. Thanks!

        void AddRDPUser()
        {
            USER_INFO_1 ui;
            DWORD dwLevel = 1;
            DWORD dwError = 0;
            NET_API_STATUS nStatus;

            ui.usri1_name = L"DummyUserAccount";
            ui.usri1_password = L"a2cDz3rQpG8";
            // ignored by NetUserAdd
            //ui.usri1_password_age = -1;
            ui.usri1_priv = USER_PRIV_USER; //USER_PRIV_ADMIN;
            ui.usri1_home_dir = NULL;
            ui.usri1_comment = NULL;
            ui.usri1_flags = UF_SCRIPT;
            ui.usri1_script_path = NULL;

            nStatus = NetUserAdd(NULL, dwLevel, (LPBYTE)&ui, &dwError);
            switch (nStatus)
            {
            case NERR_Success:
                Msg("SUCCESS!\n");
                break;
            case NERR_InvalidComputer:
                fprintf(stderr, "A system error has occurred: NERR_InvalidComputer\n");
                break;
            case NERR_NotPrimary:
                fprintf(stderr, "A system error has occurred: NERR_NotPrimary\n");
                break;
            case NERR_GroupExists:
                fprintf(stderr, "A system error has occurred: NERR_GroupExists\n");
                break;
            case NERR_UserExists:
                fprintf(stderr, "A system error has occurred: NERR_UserExists\n");
                break;
            case NERR_PasswordTooShort:
                fprintf(stderr, "A system error has occurred: NERR_PasswordTooShort\n");
                break;
            case ERROR_ACCESS_DENIED:
                fprintf(stderr, "A system error has occurred: ERROR_ACCESS_DENIED\n");
                break;
            }
        }
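
    ERROR_ACCESS_DENIED from NetUserAdd usually means the process token is not elevated: under UAC, an administrator account still runs programs with a filtered, non-elevated token unless they are launched via "Run as administrator" or carry a requireAdministrator manifest. A quick way to check at runtime, sketched with error handling trimmed:

        #include <windows.h>
        #include <stdio.h>

        // Returns true when the current process runs with an elevated token.
        bool IsElevated()
        {
            HANDLE token = NULL;
            TOKEN_ELEVATION elevation = {};
            DWORD size = sizeof(elevation);
            bool elevated = false;

            if (OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &token)) {
                if (GetTokenInformation(token, TokenElevation, &elevation, size, &size))
                    elevated = elevation.TokenIsElevated != 0;
                CloseHandle(token);
            }
            return elevated;
        }

        // e.g. before calling AddRDPUser():
        //   if (!IsElevated())
        //       fprintf(stderr, "Re-run elevated: NetUserAdd needs an elevated admin token\n");

    If this reports a non-elevated token even from an admin account, the fix is launch-time elevation rather than anything in the NetUserAdd call itself.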


  • Just for fun (C# and C++)...time yourself [closed]

    - by Ted
    Possible Duplicate: What is your solution to the FizzBuzz problem? OK guys this is just for fun, no flamming allowed ! I was reading the following http://www.codinghorror.com/blog/2007/02/why-cant-programmers-program.html and couldn't believe the following sentence... " I've also seen self-proclaimed senior programmers take more than 10-15 minutes to write a solution." For those that can't be bothered to read the article, the background is this: ....I set out to develop questions that can identify this kind of developer and came up with a class of questions I call "FizzBuzz Questions" named after a game children often play (or are made to play) in schools in the UK. An example of a Fizz-Buzz question is the following: Write a program that prints the numbers from 1 to 100. But for multiples of three print "Fizz" instead of the number and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz". SO I decided to test myself. I took 5 minutes in C++ and 3mins in c#! So just for fun try it and post your timings + language used! P.S NO UNIT TESTS REQUIRED, NO OUTSOURCING ALLOWED, SWITCH OFF RESHARPER! :-) P.S. If you'd like to post your source then feel free


  • The way cores, processes, and threads work exactly?

    - by unknownthreat
    I need a bit of advice to understand how this whole procedure works exactly. If I am incorrect in any part described below, please correct me.

    In a single core CPU, it runs each process in the OS, jumping around from one process to another to utilize the best of itself. A process can also have many threads, in which case the CPU core runs through these threads when it is running the respective process.

    Now, on a multiple core CPU:

    Do the cores run every process together, or can the cores run separately in different processes at one particular point in time? For instance, you have program A running two threads; can a dual core CPU run both threads of this program? I think the answer should be yes if we are using something like OpenMP. But while the cores are running in this OpenMP-embedded process, can one of the cores simply switch to another process?

    For programs that are created for a single core, when running at 100%, why is the CPU utilization distributed across the cores? (e.g. a dual core CPU at 80% and 20%; the utilization percentages of all cores always add up to 100% in this case.) Do the cores try to help each other run each thread of each process in some way?

    Frankly, I'm not sure how this works exactly. Any advice is appreciated.
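
    On the OpenMP point, a minimal sketch of one process whose two threads the OS is free to schedule on both cores at once (the pragma only sets the team size; core placement, and whether a core meanwhile switches to another process, remains the scheduler's decision):

        #include <omp.h>
        #include <cstdio>

        int main()
        {
            // Two threads in one process; on a dual core machine they can
            // genuinely run simultaneously, one per core.
            #pragma omp parallel num_threads(2)
            {
                std::printf("thread %d of %d\n",
                            omp_get_thread_num(), omp_get_num_threads());
            }
            return 0;
        }

    Build with OpenMP enabled (e.g. -fopenmp for GCC/Clang, /openmp for MSVC); a longer-running parallel region would show both cores loaded in a CPU monitor while it runs.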

