Search Results

Search found 1825 results on 73 pages for '64bit'.

Page 16 of 73

  • MySQL-python 1.2.3 and OS X 10.5: 64- or 32-bit?

    - by Dave Everitt
    I've been happily using Django and MySQL in development on an existing machine running OS X 10.4 Tiger, and have set up a similar environment in 10.5 Leopard on a new 64-bit MacBook, with a working MySQL and Python 2.6.4. However, now that I want them to communicate, easy_install MySQL-python gave ld warnings that the file "is not of the required architecture", which led me to test my Python 2.4.6 install (from the Mac OS X disc image):

        >>> import sys
        >>> sys.maxint
        2147483647

    Ah. So my Python install appears to be 32-bit and (I think?) won't build MySQL-python against my 64-bit MySQL. There are lots of hacks out there for MySQL-python on OS X (mostly for 1.2.2), but after hours of reading I'm pretty sure they won't fix this architecture mismatch. So I'm stuck, because I can't decide whether to: give up, remove the 64-bit MySQL install (thorough methods, please?) and use the 32-bit MySQL disc image instead; or re-install Python in 64-bit mode from the tarball, with --with-universal-archs=64-bit and --enable-universalsdk= as detailed in Python.org's 2.6 news. So my questions for anyone who has encountered this issue are: Is installing 64-bit Python on OS X 10.5 worth bothering with? If so (naive, lazy question!), how are the two required arguments combined? If I just carry on in 32-bit (as on my working setup), what am I missing? I'm after a hassle-free install that's easy to reproduce on other machines (possible student use), so I'd really welcome your opinions, please!

    Read the article

  • Problem with MDAC when trying to compile in VS2008 using the x64 target platform

    - by grobartn
    I am trying to compile a 32-bit application for 64-bit. I am aware of the problems with that, but that is why it is being compiled as a 64-bit version. I am stuck on this problem. The application uses a lot of SQL code. In the sqltypes.h file (provided by MDAC):

        #ifdef _WIN64
        typedef INT64  SQLLEN;
        typedef UINT64 SQLULEN;
        typedef UINT64 SQLSETPOSIROW;
        #else

    For some reason, when it is compiled for the 32-bit platform it works great, but when I try building it for 64-bit it goes berserk:

        Error 61 error C2146: syntax error : missing ';' before identifier 'SQLLEN' ..\external\microsoft sdk\include\sqltypes.h 50

    It does not recognize INT64 or UINT64. Is there something I need to enable so this will work in a 64-bit build? Am I missing some #include or #define? Any help would be great. Thanks.
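
    A possible direction, offered as an untested sketch rather than a confirmed fix: INT64 and UINT64 are Windows SDK types defined in <basetsd.h>, which <windows.h> pulls in, so the x64 branch of sqltypes.h only compiles if those definitions are already in scope. A minimal C++ check of that assumption:

        // Minimal sketch: make sure the SDK's 64-bit integer types are defined
        // before sqltypes.h is parsed, so its _WIN64 branch compiles.
        #include <windows.h>   // brings in basetsd.h, which defines INT64/UINT64
        #include <sqltypes.h>  // the _WIN64 typedefs (SQLLEN etc.) now have their types

        #include <cstdio>

        int main()
        {
            SQLLEN len = 0;  // 64 bits wide in an x64 build
            std::printf("sizeof(SQLLEN) = %lu\n", (unsigned long)sizeof(len));
            return 0;
        }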

    Read the article

  • Does the VB6 IDE run on Windows 7 64-bit?

    - by jasonk
    We're approaching the point of replacing several of our developer PCs and would like to move up to 64-bit to maximize the hardware life of the PCs, but we also need to support several legacy VB6 applications. That said, does the VB6 IDE run on Windows 7 64-bit? Microsoft says it's not supported, but that doesn't necessarily mean it doesn't work. Does it work? Are there any pitfalls or workarounds needed to get it running?

    Read the article

  • How does 64-bit code work on OS X 10.5?

    - by philcolbourn
    I initially thought that 64-bit instructions would not work on OS X 10.5. I wrote a little test program and compiled it with GCC -m64. I used long long for my 64-bit integers. The assembly instructions used look like they are 64-bit, e.g. imulq and movq 8(%rbp),%rax. It seems to work. I am only using printf to display the 64-bit values, using %lld. Is this the expected behaviour? Are there any gotchas that would cause this to fail? Am I allowed to ask multiple questions in a question? Does this work on other OSes?
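
    For what it's worth, here is a small sketch along the same lines as the test program described above; compiling it with g++ -m32 and then g++ -m64 and comparing the output makes the difference between the two modes visible. This is just an illustration, not a statement about what 10.5 officially supports:

        // Build twice, `g++ -m32 check.cpp` and `g++ -m64 check.cpp`, then compare.
        #include <cstdio>

        int main()
        {
            long long big = 0x123456789ALL;   // wider than 32 bits in either mode
            std::printf("sizeof(void*) = %lu, sizeof(long) = %lu\n",
                        (unsigned long)sizeof(void*), (unsigned long)sizeof(long));
            std::printf("big = %lld\n", big);  // needs the full 64-bit value
            return 0;
        }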

    Read the article

  • Why is OpenSubKey() returning null on my Win 7 64 bit system?

    - by BrMcMullin
    Has anyone seen OpenSubKey() and other Microsoft.Win32 registry functions return null on 64-bit systems when 32-bit registry keys are under Wow6432Node in the registry? I'm working on a unit testing framework that makes a call to OpenSubKey() from the .NET library. My dev system is a Win 7 64-bit environment with VS 2008 SP1 and the Win 7 SDK installed. The application we're unit testing is a 32-bit application, so the registry is virtualized under HKLM\Software\Wow6432Node. When we call:

        Registry.LocalMachine.OpenSubKey( @"Software\MyCompany\MyApp\" );

    null is returned; however, explicitly stating to look here works:

        Registry.LocalMachine.OpenSubKey( @"Software\Wow6432Node\MyCompany\MyApp\" );

    From what I understand, this function should be agnostic to 32-bit or 64-bit environments and should know to jump to the virtual node. Even stranger is the fact that the exact same call inside a compiled and installed version of our application runs just fine on the same system and gets the registry keys necessary to run, which are also placed in HKLM\Software\Wow6432Node. Any suggestions? Thanks in advance!
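
    Not an answer to the .NET question itself, but a Win32-level sketch of the same mechanism, in case it helps narrow things down: a 64-bit process can request the 32-bit (Wow6432Node) registry view explicitly with the KEY_WOW64_32KEY access flag instead of naming Wow6432Node in the path. The key path below is just the hypothetical one from the question.

        // Sketch: open the 32-bit view of HKLM\Software\MyCompany\MyApp from a
        // 64-bit process by asking for KEY_WOW64_32KEY explicitly.
        #include <windows.h>
        #include <cstdio>

        int main()
        {
            HKEY hKey = NULL;
            LONG rc = RegOpenKeyExW(HKEY_LOCAL_MACHINE,
                                    L"Software\\MyCompany\\MyApp",
                                    0,
                                    KEY_READ | KEY_WOW64_32KEY,  // 32-bit registry view
                                    &hKey);
            std::printf("RegOpenKeyExW returned %ld\n", rc);     // 0 == ERROR_SUCCESS
            if (rc == ERROR_SUCCESS)
                RegCloseKey(hKey);
            return 0;
        }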

    Read the article

  • Binary to Date (C#) 64 Bit Format

    - by Veskechky
    We have a binary file from which we have identified the following dates (as Int64 values). We know the following facts about the date/time format:

        The 64-bit date has a resolution down to the microsecond.
        The 64-bit date has a range of 4095 years.
        The Int64 9053167636875050944 (0x7DA34FFFFFFFFFC0) =  9th March 2010
        The Int64 9053176432968073152 (0x7DA357FFFFFFFFC0) = 10th March 2010
        The Int64 9053185229061095360 (0x7DA35FFFFFFFFFC0) = 11th March 2010
        The Int64 9053194025154117568 (0x7DA367FFFFFFFFC0) = 12th March 2010

    Any help on figuring out a way to convert this to a valid C# Date/Time is appreciated.
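
    One layout that happens to fit all four sample values, offered as a guess to test against more data rather than as a documented format: the top 12 bits look like the year (which matches the 4095-year range), the next 4 bits the month, and the next 5 bits the day, leaving the low 43 bits for the time of day, whose exact encoding these samples do not pin down. The question asks for C#, but the bit arithmetic carries over directly; here is a C++ sketch of that decoding:

        // Hypothetical layout, consistent with the four samples only:
        //   bits 63..52 = year, 51..48 = month, 47..43 = day, 42..0 = time of day(?)
        #include <cstdio>

        int main()
        {
            const unsigned long long samples[4] = {
                0x7DA34FFFFFFFFFC0ULL,   // expected:  9 March 2010
                0x7DA357FFFFFFFFC0ULL,   // expected: 10 March 2010
                0x7DA35FFFFFFFFFC0ULL,   // expected: 11 March 2010
                0x7DA367FFFFFFFFC0ULL    // expected: 12 March 2010
            };
            for (int i = 0; i < 4; ++i) {
                unsigned year  = (unsigned)(samples[i] >> 52);           // 12 bits
                unsigned month = (unsigned)((samples[i] >> 48) & 0xF);   //  4 bits
                unsigned day   = (unsigned)((samples[i] >> 43) & 0x1F);  //  5 bits
                std::printf("%u-%02u-%02u\n", year, month, day);         // 2010-03-09 .. -12
            }
            return 0;
        }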

    Read the article

  • Upload An Excel File in Classic ASP On Windows 2003 x64 Using Office 2010 Drivers

    - by alphadogg
    So, we are migrating an old web app from a 32-bit server to a newer 64-bit server. The app is basically a Classic ASP app. The pool is set to run in 64-bit and cannot be set to 32-bit due to other components. However, this breaks the old usage of the Jet drivers and the subsequent parsing of Excel files. After some research, I downloaded the 64-bit version of the new 2010 Office System Driver Beta and installed it. Presumably, this allows one to open and read Excel and CSV files. Here's the snippet of code that errors out. I think I followed the lean guidelines on the download page:

        Set con = Server.CreateObject("ADODB.Connection")
        con.ConnectionString = "Provider=Microsoft.ACE.OLEDB.14.0;Data Source=" & strPath & ";Extended Properties=""Excel 14.0;"""
        con.Open

    Any ideas why? UPDATE: My apologies, I did forget the important part, the error message:

        ADODB.Connection error '800a0e7a'
        Provider cannot be found. It may not be properly installed.
        /vendor/importZipList2.asp, line 56

    I have installed, and uninstalled/reinstalled, twice.

    Read the article

  • GraphicsDevice is null in my XNA Windows Game project

    - by indyK1ng
    Hello all, I have just started trying to make a simple game with XNA 3.1 to help myself learn C# and XNA. I have run into a bit of an interesting problem, however. In all of the tutorials, one is supposed to pass GraphicsDevice when instantiating a new SpriteBatch object, like this:

        spriteBatch = new SpriteBatch(GraphicsDevice);

    One might even do this:

        GraphicsDevice objGraphics = graphics.GraphicsDevice;
        spriteBatch = new SpriteBatch(objGraphics);

    where graphics is the GraphicsDeviceManager. However, no matter which version I try, I always get an ArgumentNullException when I try to pass the GraphicsDevice object to the SpriteBatch constructor. Almost every tutorial I have found gives the first option, and only one mentioned the second. Has anyone else run into a similar error, or does anyone know what could be causing this? I am working on Windows 7 x64 with Visual Studio 2008.

    Read the article

  • ::LookupAccountSid API Extremely Slow When Targeting x64 Platform (Windows 7)

    - by Chris
    During our application startup, we are making a call to ::LookupAccountSid(). When I build targeting the x86 architecture, this call is nearly instantaneous. However, when I target x64 (debug or release), the call generally takes over 40 seconds to complete. Since this is occurring during application startup, the result is fairly unpleasant, as it will appear to the user that the application is not launching. I am running Windows 7 Professional 64-bit on a Dell Studio XPS 16 (Intel Core i7 Q 720). Our application is a native Windows application written in C++. My compiler options (CCOPTS) and linker options (LINKOPTS) are as follows:

        CCOPTS = "/nologo /Gz /W3 /EHs /c /DWIN32 /D_MBCS /Ob1 /vmg /vmv /Zi /MD /DNDEBUG /DDV_BUILD_DLL /DIV_BUILD_DLL /DDVASSERT_EXCEPTION /Zc:wchar_t-"
        LINKOPTS = "/manifest:no /nologo /machine:X64 kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib /DEBUG /subsystem:windows /DLL"

    Any help would be greatly appreciated :D Thanks, --Chris
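
    For comparing the two builds directly, here is a stripped-down sketch of the call in question, timed with GetTickCount64. It looks up the well-known LocalSystem SID rather than whatever SID the real application passes, so it only isolates the API call itself; the slowdown could equally be in how the SID is obtained:

        // Sketch: time a single ::LookupAccountSid() call in an x86 vs. x64 build.
        #include <windows.h>
        #include <cstdio>

        int main()
        {
            BYTE  sid[SECURITY_MAX_SID_SIZE];
            DWORD sidSize = sizeof(sid);
            if (!CreateWellKnownSid(WinLocalSystemSid, NULL, sid, &sidSize))
                return 1;

            wchar_t name[256], domain[256];
            DWORD   cchName = 256, cchDomain = 256;
            SID_NAME_USE use;

            ULONGLONG t0 = GetTickCount64();
            BOOL ok = LookupAccountSidW(NULL, sid, name, &cchName,
                                        domain, &cchDomain, &use);
            ULONGLONG t1 = GetTickCount64();

            std::printf("LookupAccountSid %s, took %llu ms\n",
                        ok ? "succeeded" : "failed",
                        (unsigned long long)(t1 - t0));
            return 0;
        }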

    Read the article

  • SWT on Windows 64-bit

    - by Palani
    My application throws the exception below.

        Exception in thread "main" java.lang.UnsatisfiedLinkError: Cannot load 32-bit SWT libraries on 64-bit JVM

    How do I solve this? What is the name of the jar file I need?

    Read the article

  • Segmentation fault when enabling optimization in a simple GTK+ application?

    - by gatopeich
    It might be that it is too late, but I find it at least curious that the following few lines seem to be causing a segmentation fault, if and only if they are compiled with gcc optimization enabled, even "-O1"!

        settings_dialog = gtk_dialog_new_with_buttons("gatotray Settings" , NULL, 0,
            GTK_STOCK_CANCEL, FALSE, GTK_STOCK_SAVE, TRUE, 0);
        g_signal_connect(G_OBJECT(settings_dialog), "response",
            G_CALLBACK(gtk_widget_destroy), NULL);
        g_signal_connect(G_OBJECT(settings_dialog), "destroy",
            G_CALLBACK(settings_destroyed), NULL);
        GtkWidget *vb = gtk_dialog_get_content_area(GTK_DIALOG(settings_dialog));
        GtkWidget *hb = gtk_hbox_new(FALSE, 3);
        gtk_container_add(GTK_CONTAINER(hb), gtk_label_new("Background:"));
        GtkWidget *cb = gtk_color_button_new();
        gtk_container_add(GTK_CONTAINER(hb), cb);
        gtk_container_add(GTK_CONTAINER(vb), hb);

    This is the backtrace:

        (gdb) backtrace
        #0  0x00007ffff4d88052 in ?? () from /lib/libc.so.6
        #1  0x00007ffff5304112 in g_strdup () from /lib/libglib-2.0.so.0
        #2  0x00007ffff5bc799d in ?? () from /usr/lib/libgobject-2.0.so.0
        #3  0x00007ffff5ba826c in g_object_new_valist () from /usr/lib/libgobject-2.0.so.0
        #4  0x00007ffff5ba84f1 in g_object_new () from /usr/lib/libgobject-2.0.so.0
        #5  0x00007ffff78502d5 in gtk_button_new_from_stock () from /usr/lib/libgtk-x11-2.0.so.0
        #6  0x00007ffff787cc95 in gtk_dialog_add_button () from /usr/lib/libgtk-x11-2.0.so.0
        #7  0x00007ffff787cd60 in ?? () from /usr/lib/libgtk-x11-2.0.so.0
        #8  0x00007ffff787cf60 in gtk_dialog_new_with_buttons () from /usr/lib/libgtk-x11-2.0.so.0
        #9  0x0000000000402bb9 in show_settings_dialog () at settings.c:24
        #10 0x0000000000403328 in main (argc=1, argv=0x7fffffffe2b8) at gatotray.c:286

    settings.c:24 is exactly the first line listed above, so it seems "gtk_dialog_new_with_buttons" is the culprit. Versions: gcc 4.4.3, GTK+ 2.20.1. BTW, I forgot to mention that commenting out certain lines after the offending call prevents it from happening, particularly the line with "gtk_container_add(GTK_CONTAINER(hb), cb);". I tried almost all suitable combinations of GTK types and GTK_ macros; it makes no difference.
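
    A guess rather than a confirmed diagnosis, but worth ruling out: gtk_dialog_new_with_buttons() takes a NULL-terminated varargs list, and since an int 0 in a varargs call is not automatically widened to a pointer, terminating the list with the literal 0 may leave GTK reading a garbage "button text" pointer on 64-bit builds, which would fit the g_strdup frame in the backtrace. A sketch of the same call with an explicit pointer terminator and named response IDs:

        // Sketch: same dialog, but with a pointer-sized terminator and
        // GtkResponseType values instead of FALSE/TRUE.
        #include <gtk/gtk.h>

        static GtkWidget *make_settings_dialog(void)
        {
            GtkWidget *dialog = gtk_dialog_new_with_buttons(
                "gatotray Settings", NULL, (GtkDialogFlags)0,
                GTK_STOCK_CANCEL, GTK_RESPONSE_CANCEL,
                GTK_STOCK_SAVE,   GTK_RESPONSE_ACCEPT,
                (char *)NULL);   /* explicit pointer terminator for the varargs list */
            return dialog;
        }

        int main(int argc, char **argv)
        {
            gtk_init(&argc, &argv);
            GtkWidget *dialog = make_settings_dialog();
            gtk_widget_destroy(dialog);
            return 0;
        }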

    Read the article

  • What's the big difference between those two binary files?

    - by Lela Dax
    These are two files (contained in the tar.bz2) that were generated using a just-in-time compiler for a game engine. The generated code in ui-linux.bin is from an x86_64 gcc compiler, and ui-windows.bin is from the same brand of compiler but targeting win x86_64 (mingw-w64). I've attempted to debug a problem that occurs only in the Windows version, and I stumbled upon what seems to be different end-binary code. However, the input assembly code was virtually identical (the only difference being pointer representations as int). (There's theoretically no winabi/unixabi conflict, since that's taken care of by an attribute flag on certain declarations involved.) Any idea what it might be that makes these two binaries different? The C for the mini-compiler, and the base assembly producing it, appear compatible at first glance. http://www0.org/vm/bins.tar.bz2

    Read the article

  • Excel ODBC and 64 bit server

    - by Causas
    Using ASP.NET, I need to update an Excel template. Our server is running Windows 2008 in 64-bit mode. I am using the following code to access the Excel file:

        ...
        string connection = @"Provider=MSDASQL;Driver={Microsoft Excel Driver (*.xls)};DBQ=" + path + ";";
        ...

    If the application pool is set to "Enable 32-bit applications", the code works as expected; however, the Oracle driver I am using then fails, as it is 64-bit only. If "Enable 32-bit applications" is set to false, the Excel code fails with the error:

        Data source name not found and no default driver specified

    Any suggestions?

    Read the article

  • Loading the wrong jvm.dll

    - by Brittany
    When I run an executable I created, it uses the jvm.dll from C:\Windows\System32. But I want it to use the jvm.dll in C:\Program Files\Java\jdk1.6.0_17\jre\bin\server. C:\Program Files\Java\jdk1.6.0_17\jre\bin\server is in my PATH environment variable. Does anyone know how to accomplish this? Thanks.

    Read the article

  • 0xDEADBEEF equivalent for 64-bit development?

    - by Peter Mortensen
    For C++ development for 32-bit systems (be it Linux, Mac OS or Windows, PowerPC or x86) I have initialised pointers that would otherwise be undefined (e.g. they cannot immediately get a proper value) like so: int *pInt = reinterpret_cast<int *>(0xDEADBEEF); (To save typing and to stay DRY, the right-hand side would normally be in a constant, e.g. BAD_PTR.) If pInt is dereferenced before it gets a proper value then it will crash immediately on most systems (instead of crashing much later when some memory is overwritten, or going into a very long loop). Of course the behaviour is dependent on the underlying hardware (getting a 4-byte integer from the odd address 0xDEADBEEF from a user process may be perfectly valid), but the crashing has been 100% reliable for all the systems I have developed for so far (Mac OS 68xxx, Mac OS PowerPC, Linux Red Hat Pentium, Windows GUI Pentium, Windows console Pentium). For instance, on PowerPC it is illegal (bus fault) to fetch a 4-byte integer from an odd address. What is a good value for this on 64-bit systems?
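
    One commonly suggested option, offered here as a sketch rather than a guarantee: simply widen the pattern to 0xDEADBEEFDEADBEEF. On current x86-64 hardware that value is a non-canonical address (the upper bits are not a sign extension of bit 47), so dereferencing it faults immediately, which is exactly the property the 32-bit constant was relied on for; other 64-bit architectures would need their own check.

        // Sketch of a 64-bit "bad pointer" poison value (assumes a 64-bit build).
        #include <stdint.h>
        #include <cstdio>

        const uintptr_t BAD_PTR = 0xDEADBEEFDEADBEEFULL;  // non-canonical on x86-64

        int main()
        {
            int *pInt = reinterpret_cast<int *>(BAD_PTR);
            std::printf("pInt = %p\n", (void *)pInt);  // printing the value is fine...
            // *pInt = 42;  // ...but dereferencing it should fault immediately
            return 0;
        }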

    Read the article

  • Unable to install gem "pg" on Ubuntu 12.10 (AMD64)

    - by Lynx_Eyes
    I've been (unsuccessfully) trying to install the "pg" gem against my ruby 1.9.3-p286, but nothing seems to work. I've already installed postgresql (9.1), libpq-dev and a few others like postgresql-server-dev-9.1. I've tried to pass the "with-pg-config" flag to the gem install, but simply nothing seems to work. Every time I try to install the gem it outputs something like this:

        Building native extensions. This could take a while...
        ERROR: Error installing pg:
        ERROR: Failed to build gem native extension.
        /home/lynux/.rvm/rubies/ruby-1.9.3-p286/bin/ruby extconf.rb
        checking for pg_config... yes
        Using config values from /usr/bin/pg_config
        checking for libpq-fe.h... yes
        checking for libpq/libpq-fs.h... yes
        checking for pg_config_manual.h... yes
        checking for PQconnectdb() in -lpq... no
        checking for PQconnectdb() in -llibpq... no
        checking for PQconnectdb() in -lms/libpq... no
        Can't find the PostgreSQL client library (libpq)
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of necessary
        libraries and/or headers. Check the mkmf.log file for more details. You may
        need configuration options.
        Provided configuration options:
            --with-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include
            --with-opt-lib --without-opt-lib=${opt-dir}/lib
            --with-make-prog --without-make-prog --srcdir=. --curdir
            --ruby=/home/lynux/.rvm/rubies/ruby-1.9.3-p286/bin/ruby
            --with-pg --without-pg --with-pg-dir --without-pg-dir
            --with-pg-include --without-pg-include=${pg-dir}/include
            --with-pg-lib --without-pg-lib=${pg-dir}/lib
            --with-pg-config --without-pg-config --with-pg_config --without-pg_config
            --with-pqlib --without-pqlib --with-libpqlib --without-libpqlib
            --with-ms/libpqlib --without-ms/libpqlib
        Gem files will remain installed in /home/lynux/.rvm/gems/ruby-1.9.3-p286@phisiodata/gems/pg-0.14.1 for inspection.
        Results logged to /home/lynux/.rvm/gems/ruby-1.9.3-p286@phisiodata/gems/pg-0.14.1/ext/gem_make.out

    What am I doing wrong? Is there something else I should do before trying to install the gem? Thank you in advance. [EDIT] OK, so joelparkerhenderson's answer got me thinking that there might be something wrong with paths and libraries, and I went on digging a little bit further. I found this awesome post and it solved the problem! Basically the problem lies with RVM. So my problem is solved, and anyone out there who might suffer from the same thing should follow the link!

    Read the article

  • Can a 32-bit RHEL4 userland work with a 64-bit kernel?

    - by James
    Is there a way to change an i386 RHEL4 machine to run an amd64 kernel, but ensure that it still builds software into the same i386 binaries? On Debian this seems quite straightforward: just install an amd64 kernel (worst case, build one like this guy: http://www.debian-administration.org/users/jonesy/weblog/1) and prefix everything with "linux32". Then everything that considers uname -m will be unchanged; I just need to handle the few cases that consider uname -r. What is the Red Hat equivalent? Is the only way a full 64-bit installation on another disk and then chrooting back to the 32-bit system before anyone builds anything? (Even the best examples of that seem to be Debian-based.) Background: We make a large system that runs on (a variant of) i386 RHEL4. However, some of the larger RHEL build machines now have enough RAM that they might benefit from going 64-bit (for the kernel and maybe some of the bigger build steps). Our build system doesn't support cross-compilation.

    Read the article

  • AnyCPU/x86/x64 for a C# application and its C++/CLI dependency

    - by Soonts
    I'm a Windows developer using Microsoft Visual Studio 2008 SP1. My development machine is 64-bit. The software I'm currently working on is a managed .exe written in C#. Unfortunately, I was unable to solve the whole problem solely in C#, which is why I also developed a small managed DLL in C++/CLI. Both projects are in the same solution. My C# .exe build target is "Any CPU". When my C++ DLL build target is "x86", the DLL is not loaded. As far as I understood from googling, the reason is that C++/CLI, unlike other .NET languages, compiles to native code, not managed code. I switched the C++ DLL build target to x64, and everything works now. However, AFAIK everything will stop working as soon as a client installs my product on a 32-bit OS. I have to support Windows Vista and 7, both the 32-bit and 64-bit versions of each. I don't want to fall back to 32 bits: the 250 lines of C++ code in my DLL are only 2% of my codebase, and that DLL is only used in several places, so in the typical usage scenario it isn't even loaded. My DLL implements two COM objects with ATL, so I can't use the "/clr:safe" project setting. Is there a way to configure the solution and the projects so that the C# project builds an "Any CPU" version, the C++ project builds both 32-bit and 64-bit versions, and then at runtime, when the managed .EXE is starting up, it uses either the 32-bit DLL or the 64-bit DLL depending on the OS? Or maybe there's some better solution I'm not aware of? Thanks in advance!

    Read the article

  • Does IBM WebSphere MQ support a 64-bit client on Windows?

    - by orpeles
    The title says it all :) No, seriously: I'm porting a C++ 32-bit application to 64-bit on Windows. The application is a client of IBM WebSphere MQ and uses the MQ client API. Now, as the port progresses, I'm trying to find a 64-bit client. So far, no luck. Does anyone here happen to know where I can find one or, god forbid, confirm that there isn't one? Regards, Or

    Read the article

  • How to correctly load 32-bit DLL dependencies when running a program from a batch file

    - by neilwhitaker1
    I have written a tool that references Microsoft.TeamFoundation.VersionControl.Client.dll, which is a 32-bit DLL. When I build my tool on 64-bit Windows, I set Visual Studio to specifically target x86 in order to force a 32-bit build. Targeting x86 instead of "Any CPU" prevents me from getting a BadImageFormatException, as long as I invoke the tool directly (e.g. by typing "myTool.exe" on the command line). However, if I run a batch file that invokes the tool, I still get the exception. This happens even if the batch file runs in a 32-bit command prompt (%WINDIR%\SysWOW64\cmd.exe). What else can I do to make this work?
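
    Not a fix, but a small native diagnostic that can help confirm what is actually running in each scenario (direct launch vs. the batch file): it prints the pointer size of the current process and whether it is executing under WOW64. Compiled once as x86 and once as x64 and run from both command prompts, the output shows which world each invocation ends up in.

        // Diagnostic sketch: report the bitness of the current process.
        #include <windows.h>
        #include <cstdio>

        int main()
        {
            BOOL wow64 = FALSE;
            IsWow64Process(GetCurrentProcess(), &wow64);   // 32-bit process on 64-bit Windows?
            std::printf("pointer size: %lu bytes, under WOW64: %s\n",
                        (unsigned long)sizeof(void *), wow64 ? "yes" : "no");
            return 0;
        }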

    Read the article
