Search Results

Search found 7902 results on 317 pages for 'haskell platform'.


  • Is there really such a thing as a char or short in modern programming?

    - by Dean P
    Howdy all, I've been learning to program for the Mac over the past few months (I have experience in other languages). Obviously that has meant learning Objective-C and thus the plainer C it is predicated on. So I have stumbled on this quote, which refers to C/C++ in general, not just the Mac platform: "With C and C++, prefer use of int over char and short. The main reason behind this is that C and C++ perform arithmetic operations and parameter passing at integer level. If you have an integer value that can fit in a byte, you should still consider using an int to hold the number. If you use a char, the compiler will first convert the values into integers, perform the operations and then convert the result back to char."

    So my question: is this the case in the Mac desktop and iPhone OS environments? I understand that when talking about these environments we're actually talking about three or four different architectures (PPC, i386, ARM and the A4 ARM variant), so there may not be a single answer. Nevertheless, does the general principle hold that on modern 32-bit/64-bit systems, using 1-2 byte variables that don't align with the machine's natural 4-byte words doesn't provide much of the efficiency we might expect? For instance, a plain old C array of 100,000 chars is smaller than the same 100,000 ints by a factor of four, but if reading out each index during an enumeration involves a cast/boxing/unboxing of sorts, will we see lower overall 'performance' despite the saved memory overhead?
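    A minimal sketch of the promotion behaviour the quote describes (standard C++, nothing Mac- or iPhone-specific; my illustration, not the author's code):

        #include <cstdio>

        int main() {
            char a = 100, b = 3;
            // Integral promotion: both operands are converted to int before
            // the addition, so the expression a + b has type int, not char.
            char c = a + b;  // the int result is narrowed back to char here
            std::printf("sizeof(char)=%lu, sizeof(a + b)=%lu, c=%d\n",
                        (unsigned long)sizeof(char),
                        (unsigned long)sizeof(a + b), (int)c);
        }

    On a typical 32-bit target this prints sizeof(char)=1 and sizeof(a + b)=4, confirming the arithmetic happens at int width even though both operands are chars.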

    Read the article

  • Using a 64-bit compiler in Microsoft Visual C++

    - by Ben
    This question is essentially identical to an earlier question of mine that didn't receive any answers; hopefully someone can help me out this time. I am trying to compile a VC++ project as 64-bit using Visual C++ Express 2010. I know that the 64-bit compiler does not come with the default installation of VC++ Express, so I installed the Windows SDK for Windows 7 as specified here (http://msdn.microsoft.com/en-us/library/9yb4317s.aspx), which as I understand it includes the 64-bit compiler. However, there is still no 64-bit option in the Configuration Manager for VC++. After some searching I found and completed this tutorial (http://jenshuebel.wordpress.com/2009/02/12/visual-c-2008-express-edition-and-64-bit-targets/) as well as the various links at the bottom of that page. Despite all my efforts, I still cannot get the 64-bit compiler to show in VC++ (i.e. the 64-bit compiler won't show under "Active solutions platform" in the Configuration Manager). If anyone has any experience/tips with getting this to work I would really appreciate it. FYI - I am running Windows 7 (x64).

    Read the article

  • Advice on whether to use native C++ DLL or not: PINVOKE & Marshaling ?

    - by Bob
    What's the best way to do this? I have some native C++ code that uses a lot of Win32 calls together with byte buffers (allocated using HeapAlloc). I'd like to extend the code and make a C# GUI, and maybe later a basic Win32 GUI (for use where there is no .NET and limited MFC support). I see three options:
    (A) I could just rewrite the code in C# and use multiple P/Invokes. But even with the P/Invokes in a separate class, the code looks messy with all the marshaling, and I'm rewriting a lot of code.
    (B) I could create a native C++ DLL and use P/Invoke to marshal in the native data structures (see the sketch below). I'm assuming I can include the native C++ DLL/LIB in a project using C#?
    (C) Create a mixed-mode DLL (native C++ class plus managed ref class). I'm assuming this would make it easier to use the managed ref class from C#, but is that the case? Will the managed class handle all the marshaling? Can I use this mixed-mode DLL on a platform with no .NET (i.e. still access the native C++ unmanaged component), or do I limit myself to .NET-only platforms?
    One thing that bothers me about each of these options is all the marshaling. Is it better to create a managed data structure (array, string etc.) and pass that to the native C++ class, or the other way around? Any ideas on what would be considered best practice?
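    As a hedged illustration of option (B) (my sketch, not code from the question; the exported name and the transformation are hypothetical), a native DLL can expose a flat C interface over blittable types, which keeps the C# marshaling trivial:

        // native.cpp - compiled into a plain native DLL, no CLR dependency
        extern "C" __declspec(dllexport)
        int ProcessBuffer(unsigned char* buffer, int length)
        {
            // Works on the caller-supplied buffer in place; byte arrays and
            // ints are blittable, so the C# side needs no custom marshaling.
            for (int i = 0; i < length; ++i)
                buffer[i] ^= 0xFF;  // placeholder transformation
            return 0;
        }

    On the C# side a single [DllImport("native.dll")] declaration taking a byte[] and an int would call this directly, and the same DLL stays usable from a plain Win32 GUI where no .NET is present.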

    Read the article

  • Static code analysis for new language. Where to start?

    - by tinny
    I've just been given a new assignment which looks like it's going to be an interesting challenge. The customer wants a code style checking tool developed for their internal (soon to be open-sourced) programming language, which runs on the JVM. The language syntax is very Java-like. The customer basically wants me to produce something like Checkstyle. So my question is this: how would you approach this problem? Given a clean slate, what recommendations would you make to the customer? I think I have three options:
    1. Write something from scratch. I'd prefer not to do this, as this sort of code analysis problem has been solved so many times that there must be a more "framework"- or "platform"-oriented approach.
    2. Fork an existing code style checking tool and modify the parsing to fit the new language.
    3. Extend or plug into an existing static code analysis tool (maybe write a plugin for Yasca?).
    Maybe you would like to share your experiences in this area? Thanks for reading

    Read the article

  • Trying to avoid maintaining two separate solutions for x86 and x64 versions of a program

    - by Sean Anderson
    Hi all, I have a program which needs to function in both x86 and x64 environments. It uses Oracle's ODBC drivers, so I have a reference to Oracle.DataAccess.dll; this DLL is different depending on whether the system is x64 or x86. Currently I have two separate solutions and am maintaining the code in both, which is atrocious. I was wondering what the proper solution is. I have my platform set to "Any CPU", and it is my understanding that VS should compile my code to an intermediate language such that it should not matter whether I use the x86 or x64 version. Yet if I attempt to use the x64 DLL I receive the error "Could not load file or assembly 'Oracle.DataAccess, Version=2.102.3.2, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. An attempt was made to load a program with an incorrect format." I am running on a 32-bit machine, so the error message makes sense, but it leaves me wondering how I am supposed to develop this program efficiently when it needs to work on x64. Thanks.

    Read the article

  • Why does first call to java.io.File.createTempFile(String,String,File) take 5 seconds on Citrix?

    - by Ben Roling
    While debugging slow startup of an Eclipse RCP app on a Citrix server, I came to find out that java.io.File.createTempFile(String,String,File) is taking 5 seconds. It does this only on the first execution and only for certain user accounts; specifically, I am noticing it with Citrix anonymous user accounts. I have not tried many other types of accounts, but this behavior is not exhibited with an administrator account. Also, it does not matter whether the user has access to write to the given directory or not: if the user does not have access, the call will take 5 seconds to fail; if they do have access, the call will take 5 seconds to succeed. This is on a Windows 2003 Server. I've tried Sun's 1.6.0_16 and 1.6.0_19 JREs and see the same behavior. I googled a bit expecting this to be some sort of known issue, but didn't find anything. It seems like someone else would have had to run into this before. The Eclipse Platform uses File.createTempFile() during initialization to test whether various directories are writeable, and this issue adds 5 seconds to the startup time of our application. I imagine somebody has run into this before and might have some insight.

    Here is the sample code I executed to see that it is indeed this call that is consuming the time. I also tried it with a second call to createTempFile and noticed that subsequent calls return nearly instantaneously.

        public static void main(final String[] args) throws IOException {
            final File directory = new File(args[0]);
            final long startTime = System.currentTimeMillis();
            File file = null;
            try {
                file = File.createTempFile("prefix", "suffix", directory);
                System.out.println(file.getAbsolutePath());
            } finally {
                System.out.println(System.currentTimeMillis() - startTime);
                if (file != null) {
                    file.delete();
                }
            }
        }

    Sample output of this program is the following:

        C:\java.exe -jar filetest.jar C:/Temp
        C:\Temp\prefix8098550723198856667suffix
        5093

    Read the article

  • Coding Practices which enable the compiler/optimizer to make a faster program.

    - by EvilTeach
    Many years ago, C compilers were not particularly smart. As a workaround, K&R invented the register keyword to hint to the compiler that maybe it would be a good idea to keep this variable in an internal register. They also made the ternary operator to help generate better code. As time passed, the compilers matured. They became very smart, in that their flow analysis allows them to make better decisions about what values to hold in registers than you could possibly make, and the register keyword became unimportant. FORTRAN can be faster than C for some sorts of operations, due to aliasing issues. In theory, with careful coding, one can get around this restriction to enable the optimizer to generate faster code. What coding practices are available that may enable the compiler/optimizer to generate faster code? Identifying the platform and compiler you use would be appreciated. Why does the technique seem to work? Sample code is encouraged. Here is a related question. [Edit] This question is not about the overall process of profiling and optimizing. Assume that the program has been written correctly, compiled with full optimization, tested and put into production. There may be constructs in your code that prohibit the optimizer from doing the best job that it can. What can you do to refactor that will remove these prohibitions, and allow the optimizer to generate even faster code? [Edit] Offset related link
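    One example in the spirit of the question - a hedged sketch of the aliasing point above, using the non-standard but widely supported __restrict qualifier (GCC and MSVC both accept a spelling of it; C99 has restrict):

        // Without the qualifier the compiler must assume dst and src could
        // overlap, which forces reloads of src[i] and blocks vectorization.
        // Promising no aliasing lets it keep values in registers and
        // vectorize the loop.
        void scale(float* __restrict dst, const float* __restrict src,
                   float factor, int n)
        {
            for (int i = 0; i < n; ++i)
                dst[i] = src[i] * factor;
        }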

    Read the article

  • My iPhone app ran fine in the simulator but crashed in an on-device (iPod touch 3.1.2) test, where I got the following output

    - by Mickey Shine
    I was running myapp on an iPod touch and I noticed it was missing some libraries. Is that the reason?

        [Session started at 2010-03-19 15:57:04 +0800.]
        GNU gdb 6.3.50-20050815 (Apple version gdb-1128) (Fri Dec 18 10:08:53 UTC 2009)
        Copyright 2004 Free Software Foundation, Inc.
        GDB is free software, covered by the GNU General Public License, and you are
        welcome to change it and/or distribute copies of it under certain conditions.
        Type "show copying" to see the conditions.
        There is absolutely no warranty for GDB. Type "show warranty" for details.
        This GDB was configured as "--host=i386-apple-darwin --target=arm-apple-darwin".tty /dev/ttys007
        Loading program into debugger…
        Program loaded.
        target remote-mobile /tmp/.XcodeGDBRemote-237-78
        Switching to remote-macosx protocol
        mem 0x1000 0x3fffffff cache
        mem 0x40000000 0xffffffff none
        mem 0x00000000 0x0fff none
        run
        Running…
        [Switching to thread 11779]
        [Switching to thread 11779]
        sharedlibrary apply-load-rules all
        (gdb) continue
        warning: Unable to read symbols for "/Library/MobileSubstrate/MobileSubstrate.dylib" (file not found).
        2010-03-19 15:57:18.892 myapp[2338:207] MS:Notice: Installing: com.yourcompany.myapp [myapp] (478.52)
        2010-03-19 15:57:19.145 myapp[2338:207] MS:Notice: Loading: /Library/MobileSubstrate/DynamicLibraries/Backgrounder.dylib
        warning: Unable to read symbols for "/Library/MobileSubstrate/DynamicLibraries/Backgrounder.dylib" (file not found).
        warning: Unable to read symbols for "/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.1.2.sdk/usr/lib/libsubstrate.dylib" (file not found).
        MS:Warning: message not found [myappAppDelegate applicationWillResignActive:]
        MS:Warning: message not found [myappAppDelegate applicationDidBecomeActive:]
        2010-03-19 15:57:19.550 myapp[2338:207] in FirstViewController
        2010-03-19 15:57:20.344 myapp[2338:207] in load table view
        2010-03-19 15:57:20.478 myapp[2338:207] in loading splash view
        2010-03-19 15:57:22.793 myapp[2338:207] in set interface
        Program received signal: “0”.
        warning: check_safe_call: could not restore current frame

    Read the article

  • Endianness and C APIs: specifically OpenSSL

    - by Hassan Syed
    I have an algorithm that uses the following OpenSSL calls:

        HMAC_Update() / HMAC_Final()           // ripemd160
        EVP_CipherUpdate() / EVP_CipherFinal() // cbc_blowfish

    These functions take an unsigned char * to the "plain text". My input data comes from a C++ std::string::c_str(), which originates from a protocol buffer object as an encoded UTF-8 string. UTF-8 strings are meant to be endian-neutral. However, I'm a bit paranoid about how OpenSSL may perform operations on the data. My understanding is that encryption algorithms work on 8-bit blocks of data, and if an unsigned char * is used for pointer arithmetic when the operations are performed, the algorithms should be endian-neutral and I do not need to worry about anything. My uncertainty is compounded by the fact that I am working on a little-endian machine and have never done any real cross-architecture programming. My beliefs/reasoning are based on the following two properties:
    1. std::string (not wstring) internally uses an 8-bit pointer, and the resulting c_str() pointer will iterate the same way regardless of the CPU architecture.
    2. Encryption algorithms are, either by design or by implementation, endian-neutral.
    I know the best way to get a definitive answer is to use QEMU and do some cross-platform unit tests (which I plan to do). My question is a request for comments on my reasoning, which perhaps will assist other programmers faced with similar problems.
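    To make the reasoning concrete, here is a minimal sketch (my illustration, not the question's code) using OpenSSL's one-shot HMAC() helper. The digest is defined over the byte sequence exactly as it sits in memory, and a UTF-8 buffer has the same layout on big- and little-endian hosts, so the output matches across architectures:

        #include <openssl/evp.h>
        #include <openssl/hmac.h>
        #include <cstdio>
        #include <string>

        int main() {
            const std::string key  = "secret key";
            const std::string utf8 = "caf\xC3\xA9";  // "café" as UTF-8 bytes

            unsigned char md[EVP_MAX_MD_SIZE];
            unsigned int md_len = 0;
            // Any multi-byte words the hash uses internally are assembled
            // from these bytes in a way the algorithm itself defines, not
            // the host CPU, so the result is endian-independent.
            HMAC(EVP_ripemd160(),
                 key.data(), static_cast<int>(key.size()),
                 reinterpret_cast<const unsigned char*>(utf8.data()),
                 utf8.size(), md, &md_len);

            for (unsigned int i = 0; i < md_len; ++i)
                std::printf("%02x", md[i]);
            std::printf("\n");  // same hex digest on any architecture
        }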

    Read the article

  • Intent provided by Cursor is not fired correctly (LiveFolders)

    - by Felix
    In my desperation trying to get LiveFolders working, I have tried the following in my LiveFolder ContentProvider:

        public Cursor query(Uri uri, String[] projection, String selection,
                String[] selectionArgs, String sortOrder) {
            MatrixCursor mc = new MatrixCursor(new String[] {
                    LiveFolders._ID, LiveFolders.NAME, LiveFolders.INTENT });
            Intent i = null;
            for (int j = 0; j < 5; j++) {
                i = new Intent(Intent.ACTION_VIEW, Uri.parse("http://www.google.com/"));
                mc.addRow(new Object[] { j, "hello", i });
            }
            return mc;
        }

    This should normally launch the browser and display the Google homepage when clicking on an item in the LiveFolder. But it doesn't; it gives an "Application is not installed on your phone" error. No, I'm not defining a base intent for my LiveFolder. logcat says:

        I/ActivityManager( 74): Starting activity: Intent { act=android.intent.action.VIEW dat=Intent { act=android.intent.action.VIEW dat=http://www.google.com/ } flg=0x10000000 }

    It seems it embeds the Intent I give it in the data section of the Intent that is actually fired. Why is it doing this? I'm really starting to believe it's a platform bug.

    Read the article

  • How to clear XFixes regions

    - by ~buratinas
    Hi, I'm writing some low-level code for the X11 platform. To achieve the best data copying performance I use the XFixes/XDamage extensions. How can I clear the contents of an XFixes region after one refresh cycle? Or do they clean themselves after I use XFixesSetPictureClipRegion? My code is something like this (the types are abbreviated pseudocode):

        Display xdpy;
        XShamPixmap pixmap_;
        XFixesRegion region_;

        damage_event_callback(damage_geometry_t geometry, XDamage damage, ...) {
            unsigned curr_region = XFixesCreateRegion(xdpy, 0, 0);
            XDamageSubtract(xdpy, damage, None, curr_region);
            XFixesTranslateRegion(xdpy, curr_region, geometry.left(), geometry.top());
            XFixesUnionRegion(xdpy, region_, region_, curr_region);
        }

        process_damage_events(...) {
            XFixesSetPictureClipRegion(xdpy, pixmap_, 0, 0, region_);
            XCopyArea(xdpy, window_->id(), pixmap_,
                      XDefaultGC(xdpy, XDefaultScreen(xdpy)),
                      0, 0, width(), height(), 0, 0);
            /* Should clear region_ here */
            ...
        }

    Currently I clear the region by deleting and recreating it, but I guess that's not the best way to do it.
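    Not from the question itself, but one hedged possibility: XFixesSetRegion replaces a region's contents with an explicit rectangle list, so passing zero rectangles should empty an existing region in place, without the destroy/recreate round trip:

        #include <X11/extensions/Xfixes.h>

        // Sets the region to the empty rectangle list, i.e. clears it.
        // (Assumes xdpy and region are the display and region_ above.)
        void clear_region(Display* xdpy, XserverRegion region)
        {
            XFixesSetRegion(xdpy, region, nullptr, 0);
        }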

    Read the article

  • How important is the programming language when you choose a new job?

    - by Luhmann
    We are currently hiring at the company where I work, and here the codebase is in VB.NET. We are worried that we miss out on a lot of brilliant programmers who would never consider working with VB.NET. My own background is Java and C#, and I was somewhat sceptical as to whether it would work out with VB, as - to be honest - I didn't care much for VB. After a month or so I was completely fluent in VB, and a few months later I discovered to my surprise that I actually like VB. I still code my free-time projects in C# and Boo, though. So my question is, firstly: how important is the language to you when you choose a new programming job? Let's say it's a great company, the salary is good, and it's generally an attractive workplace. Would you say no to the perfect job if the language wasn't your preferred dialect? VB or C# is one thing, but what about Java versus C#, etc.? Secondly, if the best developers won't join your company because of your language or platform, would you consider changing in order to get the right people? (This is not a language-bashing thread, so please no religious language wars) NB: This is Community Wiki

    Read the article

  • Why can't I simply copy installed Perl modules to other machines?

    - by pistacchio
    Being very new to Perl but not to dynamic languages, I'm a bit surprised at how awkward module management is. Sure, cpan X does theoretically work, but I'm working on the same project from three different machines and OSs (at work, at home, and testing in an external environment):
    - At work (Windows 7) I have problems using cpan because our firewall makes FTP unusable.
    - At home (Mac OS X) it does work.
    - In the external environment (Linux CentOS) it worked after hours, because I don't have root access and I had to configure cpan to operate as a non-root user.
    - I've also tried another server where I have access. Where the previous external environment is a VPS, so I have shell access, this other one is a cheap shared hosting account where I have no way to install new modules other than the ones pre-installed.
    At the moment I still can't install Template under Windows. I've seen that as an alternative I could compile it, and I've also tried ActiveState's PPM, but the module doesn't exist there. Now, my perplexity is about Perl being a dynamic language. I've had all these kinds of problems while working with C, for example, where I had to compile all the libraries for each platform, but I thought that with Perl the approach would be very similar to Python's or PHP's, where in 90% of cases copying the module into a directory and importing it simply works. So, my questions: if Perl's modules are written in Perl, why does the copy/paste approach not work? If some modules (or some parts of them) have to be compiled, how do I see on CPAN whether a module is Perl-only or relies on compiled libraries? Isn't there a way to download the module (tar, zip...) and use cpan to deploy it? This would solve my problem under Windows.

    Read the article

  • Something about Stream

    - by sforester
    I've been working on something that makes use of streams, and I found myself unclear about some stream concepts (you can also view another question posted by me at http://stackoverflow.com/questions/2933923/about-redirected-stdout-in-system-diagnostics-process).
    1. How do you indicate that you have finished writing to a stream - by writing something like an EOF?
    2. Following the previous question: if I have written an EOF (or something like that) to a stream but didn't close the stream, and then I want to write something else to the same stream, can I just start writing to it with no more setup required?
    3. If a procedure tries to read a stream (like stdin) that no one has written anything to, the reading procedure will be blocked; eventually some data arrives and the procedure will read until the writing is done, which is indicated by getting a return of 0 bytes read rather than being blocked. Now, if the procedure issues another read on the same stream, it will still get a count of 0 and return immediately, while I was expecting it to block, since no one is writing to the stream anymore. So does the stream hold different states between when it is open but no one has written to it yet, and when someone has finished a writing session?
    I'm using Windows and the .NET framework, if any of this is platform-specific. Thanks a lot!

    Read the article

  • Changing default compiler in Linux, using SCons

    - by ereOn
    On my Linux platform, I have several versions of gcc. Under /usr/bin I have gcc34, gcc44 and gcc. Here are some outputs:

        $ gcc --version
        gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-48)
        $ gcc44 --version
        gcc44 (GCC) 4.4.0 20090514 (Red Hat 4.4.0-6)

    I need to use the 4.4 version of gcc; however, the default seems to be the 4.1 one. Is there a way to replace /usr/bin/gcc and make gcc44 the default compiler without using a symlink to /usr/bin/gcc44? The reason I can't use a symlink is that my code has to be shipped in an RPM package using mock. mock creates a minimal Linux installation from scratch and just installs the specified dependencies before compiling my code in it; I cannot customize this "minimal installation". Ideally, the perfect solution would be to install an official RPM package that replaces gcc with gcc44 as the default compiler. Is there such a package? Is this even possible/good?

    Additional information: I have to use SCons (a make alternative), and it doesn't let me specify the binary to use for gcc. I will also accept any answer that tells me how to specify the gcc binary in my SConstruct file.

    Read the article

  • Created files on Archos 5 invisible on Windows XP

    - by user352042
    I am fairly new to Android and this is my first post, so I apologise in advance if I am breaking protocol or posting to the wrong board. Please feel free to move this post somewhere more appropriate if required. I am developing for the 160 GB Archos 5 Internet tablet. Not ideal as a development platform, I know, but customer requirements mean we have no choice. It is running Android 1.6. I have updated the device firmware to the most recent available; updating the version of Android is not an option at this point.

    Part of my app's requirement is to write information out to .txt files on the external storage directory, so that these can be copied over the USB connection to a Windows XP PC using the Mobile Media Device (MTP) mode. I have followed all the instructions I have come across carefully, e.g. I check that the storage is available using the technique described at http://developer.android.com/guide/topics/data/data-storage.html#filesExternal. However, although the files are created successfully on the device (I can browse them and open them using the device's File Explorer - they are fine), when I connect the device to a Windows XP computer none of the directories or files I created appear, and the sizes of their parent folders suggest they do not exist. I have tried running over the ADB, checked logcat, tried a (signed) release version, and even written a second test application which just creates a folder (this behaves the same, i.e. it creates the folder but it is not visible in Windows Explorer) - nothing anywhere gives me any suggestion as to what the problem might be. If anyone has heard of this before or has any ideas as to what else I could try to fix it, please get in touch! We do not have any other devices to test on at the moment, although I hope to remedy this soon, customer permitting.

    Read the article

  • How should rules for Aggregate Roots be enforced?

    - by MylesRip
    While searching the web, I came across a list of rules from Eric Evans' book that should be enforced for aggregates:
    1. The root Entity has global identity and is ultimately responsible for checking invariants.
    2. Root Entities have global identity. Entities inside the boundary have local identity, unique only within the Aggregate.
    3. Nothing outside the Aggregate boundary can hold a reference to anything inside, except to the root Entity. The root Entity can hand references to the internal Entities to other objects, but they can only use them transiently (within a single method or block).
    4. Only Aggregate Roots can be obtained directly with database queries. Everything else must be done through traversal.
    5. Objects within the Aggregate can hold references to other Aggregate roots.
    6. A delete operation must remove everything within the Aggregate boundary all at once.
    7. When a change to any object within the Aggregate boundary is committed, all invariants of the whole Aggregate must be satisfied.
    This all seems fine in theory, but I don't see how these rules would be enforced in the real world. Take rule 3, for example: once the root entity has given an external object a reference to an internal entity, what's to keep that external object from holding on to the reference beyond the single method or block? (If the enforcement of this is platform-specific, I would be interested in knowing how it would be enforced in a C#/.NET/NHibernate environment.)

    Read the article

  • Flash compiler error 1061: Call to a possibly undefined method run... but run exists!

    - by Zane Geiger
    So I've been working on making a game in Processing, but I think Flash would be a better way to get more people playing it, so I've decided to learn Flash. The problem is that I keep getting really stupid errors on incredibly simple things. For instance, I want to make a 'Block' object to use in a platform game. So I make a new .as file, name it Block.as, and define the Block class within it like so:

        package {
            public class Block {

                public function Block() {
                    // constructor code
                }

                public function run() {
                }
            }
        }

    I don't want to add the code yet; I just want to ensure that this works. So in my main timeline code, I try to create an instance of the Block object and execute its run method:

        var block1:Block = new Block();
        block1.run();

    Every time, it gives me this inane error:

        Scene 1, Layer 'Layer 1', Frame 1, Line 2
        1061: Call to a possibly undefined method run through a reference with static type Block.

    What undefined method!? It's defined RIGHT THERE in Block.as. The class file is even in the same folder and everything. I'm getting REALLY annoyed at how poorly Flash handles such a ridiculously simple project. Does anyone know why Flash hates me?

    Read the article

  • How to force a WebPart to appear on all pages of a portal in ASP.NET?

    - by Mehdi
    Hi, I'm working on a portal/CMS project and (unfortunately) built the foundation on the WebParts platform. I need to provide an option for the admin to choose whether a WebPart should be displayed on all pages or not. I've found a nice article from Damon Armstrong that describes a way to store all the personalization data of a group of pages in one record, so that every change the admin makes to a WebPart affects all pages. But it doesn't seem to be a solution for me, for these reasons:
    1. The above solution works on a group of pages; in fact we can select on which pages to display all WebParts, but we expect the reverse: to select which WebParts to display on all pages.
    2. After some data entry and adding WebParts to pages, we'll face an issue with the massive size of the personalization record that has to be serialized and deserialized to display the contents of each page.
    Maybe it could be solved by writing another custom personalization provider or some hacking on the WebParts system, but I don't know how. Any ideas about the problem? Thanks

    Read the article

  • Indices instead of pointers in STL containers?

    - by zvrba
    Due to specific requirements [*], I need a singly-linked list implementation that uses integer indices instead of pointers to link nodes. The indices are always interpreted with respect to a vector containing the list nodes. I thought I might achieve this by defining my own allocator, but looking into gcc's implementation of <list>, the link fields in the list nodes are explicitly declared as pointers (i.e., they do not use the pointer type provided by the allocator):

        struct _List_node_base
        {
            _List_node_base* _M_next;  ///< Self-explanatory
            _List_node_base* _M_prev;  ///< Self-explanatory
            ...
        };

    (For this purpose, the allocator interface is also deficient in that it does not define a dereference function; "dereferencing" an integer index always needs a pointer to the underlying storage.) Do you know of a library of STL-like data structures (I am mostly in need of singly- and doubly-linked lists) that uses indices (with respect to a base vector) instead of pointers to link nodes?

    [*] Saving space: the lists will contain many 32-bit integers. With two pointers per node (the STL list is doubly-linked), the overhead is 200%, or 400% on a 64-bit platform, not counting the overhead of the default allocator.
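    For comparison, a minimal hand-rolled sketch of the idea (illustrative names, not a library recommendation): nodes live in a vector and link by 32-bit index, so the per-node link overhead stays 4 bytes even on a 64-bit platform:

        #include <cstdint>
        #include <vector>

        struct IndexNode {
            std::uint32_t value;
            std::uint32_t next;  // index into the pool; npos marks the end
        };

        struct IndexList {
            static const std::uint32_t npos = 0xFFFFFFFFu;
            std::vector<IndexNode> pool;  // owns every node
            std::uint32_t head = npos;

            void push_front(std::uint32_t v) {
                pool.push_back(IndexNode{v, head});
                head = static_cast<std::uint32_t>(pool.size() - 1);
            }

            // "Dereferencing" an index always goes through the base vector.
            IndexNode& at(std::uint32_t i) { return pool[i]; }
        };

    Erasure then needs a free list threaded through the same vector (or a swap-with-last fixup), which is exactly where a ready-made library would earn its keep.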

    Read the article

  • How can I create an Image in GDI+ from a Base64-Encoded string in C++?

    - by Schnapple
    I have an application, currently written in C#, which can take a Base64-encoded string and turn it into an Image (a TIFF image in this case), and vice versa. In C# this is actually pretty simple. private byte[] ImageToByteArray(Image img) { MemoryStream ms = new MemoryStream(); img.Save(ms, System.Drawing.Imaging.ImageFormat.Tiff); return ms.ToArray(); } private Image byteArrayToImage(byte[] byteArrayIn) { MemoryStream ms = new MemoryStream(byteArrayIn); BinaryWriter bw = new BinaryWriter(ms); bw.Write(byteArrayIn); Image returnImage = Image.FromStream(ms, true, false); return returnImage; } // Convert Image into string byte[] imagebytes = ImageToByteArray(anImage); string Base64EncodedStringImage = Convert.ToBase64String(imagebytes); // Convert string into Image byte[] imagebytes = Convert.FromBase64String(Base64EncodedStringImage); Image anImage = byteArrayToImage(imagebytes); (and, now that I'm looking at it, could be simplified even further) I now have a business need to do this in C++. I'm using GDI+ to draw the graphics (Windows only so far) and I already have code to decode the string in C++ (to another string). What I'm stumbling on, however, is getting the information into an Image object in GDI+. At this point I figure I need either a) A way of converting that Base64-decoded string into an IStream to feed to the Image object's FromStream function b) A way to convert the Base64-encoded string into an IStream to feed to the Image object's FromStream function (so, different code than I'm currently using) c) Some completely different way I'm not thinking of here. My C++ skills are very rusty and I'm also spoiled by the managed .NET platform, so if I'm attacking this all wrong I'm open to suggestions.

    Read the article

  • Boost::Asio - Removing the "null" character at the end of TCP packets

    - by shump
    I'm trying to make a simple MSN client, mostly for fun but also for educational purposes. I started by trying some TCP packet sending and receiving using Boost Asio, as I want cross-platform support. I have managed to send a "VER" command and receive its response. However, after I send the following "CVR" command, Asio raises an "End of file" error. After some further research I found, by packet sniffing, that my TCP packets to the Messenger server get an extra "null" character (ASCII code 00) at the end of the message. This means that my VER command gets an extra character at the end, which I don't think the Messenger server likes, and is why it shuts down the connection when I try to read the CVR response. This is how my packet looks when sniffing it (its payload):

        (Hex:)  56 45 52 20 31 20 4d 53 4e 50 31 35 20 43 56 52 30 0a 0a 00
        (Char:) VER 1 MSNP15 CVR 0...

    and this is how Adium (a chat client for OS X)'s packet looks:

        (Hex:)  56 45 52 20 31 20 4d 53 4e 50 31 35 20 43 56 52 30 0d 0a
        (Char:) VER 1 MSNP15 CVR 0..

    So my question is whether there is any way to remove the null character at the end of each packet, or if I've misunderstood something and used Asio in a wrong way. My write function (slightly edited) looks like this:

        int sendVERMessage()
        {
            boost::system::error_code ignored_error;
            char sendBuf[] = "VER 1 MSNP15 CVR0\r\n";
            boost::asio::write(socket, boost::asio::buffer(sendBuf),
                               boost::asio::transfer_all(), ignored_error);
            if (ignored_error) {
                cout << "Failed to send to host!" << endl;
                return 1;
            }
            cout << "VER message sent!" << endl;
            return 0;
        }

    And here's the main documentation on the MSN protocol I'm using. Hope I've been clear enough.
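    One hedged observation (mine, not from the question): boost::asio::buffer(sendBuf) over a char array wraps all sizeof(sendBuf) bytes, and for a string literal that includes the trailing '\0' - which would produce exactly the stray 00 byte in the capture above. Passing an explicit length excludes it:

        #include <boost/asio.hpp>
        #include <cstring>
        #include <iostream>

        int sendVERMessage(boost::asio::ip::tcp::socket& socket)
        {
            boost::system::error_code error;
            const char sendBuf[] = "VER 1 MSNP15 CVR0\r\n";
            // strlen stops at the terminator, so the '\0' is not sent.
            boost::asio::write(socket,
                               boost::asio::buffer(sendBuf, std::strlen(sendBuf)),
                               boost::asio::transfer_all(), error);
            if (error) {
                std::cout << "Failed to send to host!" << std::endl;
                return 1;
            }
            std::cout << "VER message sent!" << std::endl;
            return 0;
        }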

    Read the article

  • Developing Air (Flex) Applications for Android and Desktop

    - by Roaders
    I am an experienced Flex and AIR developer and love Android, having owned a G1, a Milestone (Droid), a Nexus One, a Galaxy S and now a Nexus S. Understandably, I am interested in developing Flex applications for Android. I have just started working through the "Flex for Android in 90 minutes" tutorial here: http://coenraets.org/flexandroid90/FlexAndroid90Minutes.pdf. The very first step says that I have to create a Flex Mobile Project. I was under the impression that the whole point of AIR is that the same application can run on many different platforms; I was intending to create an AIR app with different skins that could be swapped in and out depending on the platform it was running on. This seems to imply that I will have to compile my AIR app once for desktop and once for mobile. That isn't the end of the world, but it's not quite how I expected it to work. I suppose that if I am creating mobile-specific skins then I may as well create a mobile-specific app. Is it possible to create one AIR app that will run on both mobile and desktop? Is this a good idea?

    Read the article

  • Where do objects merge/join data in a 3-tier model?

    - by BerggreenDK
    It's probably a simple 3-tier problem; I just want to make sure we use best practice for this, and I am not that familiar with the structures yet. We have the 3 tiers:
    - GUI: ASP.NET for the presentation layer (first platform)
    - BAL: the business layer will handle the logic on a web server in C#, so we can use it both for WebForms/MVC and web services
    - DAL: LINQ to SQL in the data layer, returning business objects, not LINQ objects
    - DB: the SQL server will be Microsoft SQL Server/Express (haven't decided yet)
    Let's think of a setup where we have a database of [Person]s. They can all have multiple [Address]es, and we have a complete list of all [PostalCode]s and corresponding city names etc. The deal is that we have joined a lot of details from other tables. {Relations}/[tables]:

        [Person]:1 --- N:{PersonAddress}:M --- 1:[Address]
        [Address]:N --- 1:[PostalCode]

    Now we want to build the DAL for Person. How should the PersonBO look, and when do the joins occur? Is it a business-layer problem to fetch all city names and possible addresses per Person, or should the DAL complete all this before returning the PersonBO to the BAL?

        class PersonBO
        {
            public int ID { get; set; }
            public string Name { get; set; }
            public List<AddressBO> Addresses { get; set; } // Question #1
        }
        // Q1: do we retrieve the objects before returning the PersonBO, and
        // should it be an array instead? Or is this totally wrong for
        // n-tier/3-tier?

        class AddressBO
        {
            public int ID { get; set; }
            public string StreetName { get; set; }
            public int PostalCode { get; set; } // Question #2
        }
        // Q2: do we make the lookup now or just leave the PostalCode for a
        // later lookup?

    Can anyone explain in what order to pull which objects? Constructive criticism is very welcome. :o)

    Read the article

  • Developing on a Windows machine that interacts with a Linux system

    - by Jamie
    Sorry for the bad title (I couldn't think of a better way to describe it). I have a Windows machine which I do development on. However, I have a new project which needs to interact with a Linux system (executing Linux commands etc.), so obviously I can't do development on my Windows machine - and I don't wish to code on the dev machine, svn commit, and then svn update on the Linux machine. Is there a way for any changes I make on my dev machine to be quickly mirrored to the Linux machine? SVN is not a very quick alternative, and of course some changes will be very minor. Any ideas? A network share, I guess... but that's not very pretty (and a bit slow too). As fellow developers, I would like to know if you've been in a similar situation and how you've resolved it. As a further note, I can't just install Ubuntu as my development machine and mirror the commands, applications etc. from the Linux machine, because it's a cluster 'master' machine and therefore has quite a special configuration. Thanks guys! EDIT: I've also thought about having web services on the Linux machine and then just calling them from code, thus removing the platform dependency from development. What do you think about that too? Thanks.

    Read the article
