Search Results

Search found 14975 results on 599 pages for 'os'.


  • "Windows was unable to complete the format" - Can't format flash drive!

    - by Jake
    I have an 8GB SanDisk Cruzer USB flash drive that I ruined a while back trying to make it bootable. Now when I click on it I get a message that it needs to be reformatted before it can be used. When I try to do that, the formatting starts and then abruptly ends with the message "Windows was unable to complete the format". This happens no matter which file system I choose (NTFS, FAT32, exFAT) and no matter which computer I try it on. I am now on a Windows 7 32-bit OS, and before that I tried it on a Vista Home Basic machine. Does anyone know a way around this issue? Many thanks.
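
    A first step often suggested for drives in this state (offered as a sketch, not a guaranteed fix, and it destroys anything left on the stick) is to wipe the partition table with diskpart before formatting:

        rem save as wipe.txt and run from an elevated prompt with: diskpart /s wipe.txt
        list disk
        rem replace 1 with the number your USB drive shows under "list disk"
        select disk 1
        rem "clean" removes all partition and boot data from the selected drive
        clean
        create partition primary
        format fs=fat32 quick
        assign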


  • Can't communicate with Primary DNS Server

    - by horsley
    A Windows 7 computer suddenly can't access any website by domain name. What I have observed:

    - The fault persists whether the computer uses a wired link or connects to the WLAN.
    - IP and DNS settings are obtained automatically and seem normal (ipconfig /all returns the correct info).
    - I can visit websites through an HTTP proxy.
    - The DNS server is available; another computer in my room works properly.
    - I can ping myself, the gateway, and any other IP, but not domains.
    - I can use nslookup and obtain the correct IP info.
    - There are DNS Client errors in the event log saying the client cannot verify that the DNS server is available.
    - Windows network diagnosis reports that Windows can't communicate with the device or resource (Primary DNS Server).

    I suspect the DNS client is to blame. I tried the following, but the fault persists:

    - Reinstalled the network adapter driver
    - Reset TCP/IP (netsh int ip reset)
    - Reset Winsock (netsh winsock reset)
    - Reset the LSP

    I don't want to reinstall the whole OS. What should I do?
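
    One low-risk thing worth trying before anything drastic (a sketch of commonly suggested steps, not a confirmed fix) is to flush the resolver cache and restart the DNS Client service from an elevated prompt, then test resolution against an outside server:

        ipconfig /flushdns
        net stop dnscache
        net start dnscache
        rem query a public DNS server directly to rule out the configured one
        nslookup example.com 8.8.8.8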


  • Restoring from .wim image without access to Windows DVD

    - by Steven H
    I'm attempting to fix a friend's computer. It will not boot to anything Windows-related (see my earlier question for more information). I was able to boot into Peppermint OS to back up her files and grab the HP OEM image (.wim) so that I could restore from it (it's an OEM W7 key, so I can't just do a plain W7 reinstall). However, I cannot figure out what I need to do to actually restore her computer from that image. I tried using these instructions on TechNet to create a WinPE flash drive, but those instructions don't actually make the flash drive bootable, so that option didn't work (the partition is labeled as active, but when trying to boot from it I get the message "Remove disks or other media. Press any key to restart."). All of the other instructions I found require that I get into WinRE or boot from an install disk, which I cannot do. Any suggestions as to how I can apply this .wim image?
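
    For the non-booting WinPE stick specifically, the step such instructions often omit is writing a boot sector to the drive; a sketch, assuming the flash drive is mounted as F: on a working Windows machine (bootsect.exe ships on Windows 7 media and with the WAIK):

        rem write the Windows 7 boot code to the flash drive's partition and MBR
        bootsect /nt60 F: /mbr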


  • Windows CE and the Compact Framework are dead?

    - by Valter Minute
    This is one of the questions I've been asked more and more frequently at my public speeches and each time I meet customers. The announcement of the new Windows Phone 7 platform and the release of Visual Studio 2010 generated a bit of confusion around Windows CE and some of the technologies it supports. Windows CE is still alive, and a lot of good programmers are working on the new releases (I had a chance to meet some of them during the MVP summit in February). Here's a blog post from Olivier Bloch that describes the situation and provides some good news about the OS: http://blogs.msdn.com/obloch/archive/2010/05/03/windows-ce-is-not-dead.aspx As you can read there, Windows Phone 7 keeps its "roots" inside Windows CE. Regarding the .NET Compact Framework, this article from Abhinaba's excellent "I know the answer (it's 42)" blog (it seems that we share a passion for photography, Douglas Adams, and embedded development) explains that the .NET CF is the foundation of the XNA and Silverlight implementations on the WP7 platform: http://blogs.msdn.com/abhinaba/archive/2010/03/18/what-is-netcf.aspx So Windows CE is here to stay, powering one of the most interesting smartphone platforms and ready to power your devices as well. Add those blogs to your RSS reader list and stay tuned for more good news about CE and the Compact Framework!


  • Unable to install Ubuntu 12.04.1 on VirtualBox on Windows 7 Host

    - by arcube
    I would like to install Ubuntu 12.04.1 in VirtualBox on a Windows 7 host, but I get a black screen after selecting "Install Ubuntu", right after choosing the English language. I have tried both the 32-bit and 64-bit desktop versions, and have also read through some guides, e.g. http://www.psychocats.net/ubuntu/virtualbox, but I could not find any solution to this problem. BTW, I have Ubuntu installed under Virtual PC (not running at the same time), and I also have Mac OS (Mountain Lion) installed under VirtualBox, so I do not think it is a problem with VirtualBox or the Windows 7 host. My VM is configured with 1 core and 1 GB of RAM on a 20GB VDI; the rest is the default configuration. My hardware is a Core i7 3770K / Asus P8Z77-V Deluxe / G.Skill 32GB, which I believe should be more than sufficient to run another VM. I have moved my user profile from the SSD to an HDD via a junction, but IMHO that should not matter. Has anyone else come across this problem and found a solution? Any help or thoughts on how to solve this would be very much appreciated.
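
    A black screen at the installer's first graphical stage is often video-related. A couple of VM settings people commonly toggle from the host are sketched below (the VM name "Ubuntu1204" is a placeholder for whatever the machine is called; run with the VM powered off):

        rem give the guest more video memory and enable IO-APIC and PAE
        VBoxManage modifyvm "Ubuntu1204" --vram 64
        VBoxManage modifyvm "Ubuntu1204" --ioapic on
        VBoxManage modifyvm "Ubuntu1204" --pae on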


  • Installer doesn't display partition I want to install to

    - by Aditya
    While performing an Ubuntu 10.10 installation on my laptop, the installer doesn't show the partitions I expect. My PC configuration is as follows:

    HP Pavilion dv6-2020AX
    AMD Turion II Dual Core Mobile Processor M500
    4 GB RAM
    OS installed: Windows 7
    500 GB hard drive, partitioned as follows:
    C: 227 GB (free: 142 GB)
    D: 11.9 GB (free: 1.98 GB) - Recovery
    F: 174 GB (free: 18 GB)
    G: 50.5 GB (free: 50.4 GB)

    I want to perform a dual-boot installation so that Ubuntu resides in the free space on G:. I started the Ubuntu 10.10 installation and selected the manual partitioning feature. However, in the 'Allocate Drive Space' section of the installer, the following partition information is displayed (where /dev/sda is the 500 GB disk):

        Partition    Type    Size         Used
        /dev/sda1            1 MB         unknown
        /dev/sda2    ntfs    208 MB       unknown
        /dev/sda3    ntfs    244813 MB    168540 MB
        /dev/sda4    ntfs    255083 MB    3221 MB

    So what exactly is the problem? What should I do to install Ubuntu 10.10 in the G: space? Why are the partitions not shown the way they should be? Any suggestions? Thank you for the help.
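
    To see exactly what the installer is reading, it can help to print the partition table from the live session first; a sketch (run from a terminal in the Ubuntu live environment):

        # show the partition table the kernel sees, including partition types
        sudo fdisk -l /dev/sda
        # or the same information with human-readable sizes
        sudo parted /dev/sda print

    If the output lists four primary MBR partitions, that by itself would explain why the installer cannot offer the G: free space as a place to create a new partition.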


  • Adding Ubuntu installation to Vista boot loader

    - by frapfap
    Hi, I had a Vista partition, then created another partition and installed Ubuntu 9.10 on it. During the Ubuntu installation I unchecked "Install Boot Loader", so it didn't install the GRUB bootloader. I wanted to keep Vista's boot loader so I could manage it within Vista, as I know you can; I've just forgotten where in the Control Panel you do it! Anyway, for some reason I incorrectly assumed that the Ubuntu entry would be added to the Vista boot loader. How do I enable choosing which OS to use when booting the computer? At the moment it just loads Vista automatically. Apologies if I'm technically incorrect; what I explained is what I thought was going on! Thanks.
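
    One commonly described way to add a non-Windows entry to the Vista boot manager is bcdedit from an elevated command prompt. A sketch follows; it assumes the GRUB boot sector has already been exported to C:\ubuntu.bin (e.g. with dd from the Ubuntu partition), and {ID} stands for the GUID printed by the first command:

        rem create a boot-sector entry named Ubuntu in the BCD store
        bcdedit /create /d "Ubuntu" /application bootsector
        rem point the new entry (use the GUID printed above) at the exported boot sector
        bcdedit /set {ID} device partition=C:
        bcdedit /set {ID} path \ubuntu.bin
        rem add the entry to the end of the boot menu
        bcdedit /displayorder {ID} /addlast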


  • crunchbang: it takes up *how* much memory?!?!

    - by Theo Moore
    I've been trying many Linux distros lately, looking for something I like for my netbook. I started out with Ubuntu, and I can tell you I am a big fan. Ubuntu is now fast to install, much simpler to administer, and pretty light resource-wise. My original install was the standard 32-bit version of 9.04. I tried the netbook remix version of this release, but it was very, very slow. Even the full-blown version used only about 200MB. Much better than the almost 800 that the recommended Windows version took. Once the newest release of Ubuntu came out, I decided to try the netbook remix of 10.04. It used even less RAM; only about 150MB. I thought I'd found my OS. I certainly settled in and prepared to use it forever. Then someone I know suggested I try CrunchBang. It has the most minimalistic UI I've ever seen, using Openbox rather than Gnome or KDE. Very slick, simple, and clean. Since I am using the alpha of the most recent version (based on Debian Squeeze), the apps provided for you are few... although more will be provided soon. You do have a word processor, etc., although not the OpenOffice you would normally get in Ubuntu. But the best part? 48MB. That's it. 48MB fully loaded, supporting what I call "hotel services". It's fast, boots quickly, and believe it or not, I can even do Java-based development... on my netbook! Pretty slick. More on it as I use it.


  • Database checksum features - redundant? useful?

    - by Eloff
    Just about every mainstream DB has a feature to calculate checksums per page, per sector, or per record. Now, for a DB that does full recovery after any crash, like PostgreSQL, is a checksum even useful? There will be no data loss as long as the xlog is OK, no matter what kind of corruption happened to the data itself, because as the redo log is replayed, every committed transaction will be restored. So checksums are useless for restore. Doesn't the filesystem or disk keep checksums anyway to detect corruption? And unless the checksum is per record, all it does is tell you there is corruption, which the OS should be yelling about the minute you try to read it; so is it useless in operation too? I can't imagine how a checksum can be helpful in any sane database, but since they all use them, I'd say that's just a failure of imagination on my part. So how is it useful?


  • Windows 8 cloned drive in 2nd computer

    - by Mark
    I did the Windows 8 Pro upgrade on a machine with a 64GB SSD. Finding 64GB not enough, I ordered a 128GB SSD (Samsung 830), planning to use Clonezilla to clone the Windows 8 OS to it. I might then try using the 64GB SSD (with the Windows 8 upgrade on it) as a boot drive in a backup machine. I understand that I need to do some registry work to make Windows happy about the SSD 'transplant', but I am worried about having the same activation key registered on two computers. Am I at high risk of getting 'deactivated'? Note that the backup machine is only used when the primary computer is off.


  • How can I restore my system from WIM files?

    - by Brian Henk
    I installed another OS on my netbook and decided I want to revert back to Windows 7 Starter. I was careful to keep the recovery partition, but even when I manage to boot to it, the system just restarts a few seconds after I select "restore". I grabbed all the files from the recovery partition onto a flash drive. I have also been able to use this drive to boot a Windows 7 installer, but it was unable to find the recovery partition. These WIM files seem to be the key to installing Windows again. How can I use them?
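
    WIM images can usually be applied by hand from a Windows PE or Windows 7 setup command prompt using ImageX from the WAIK. A sketch, where E:\recovery\install.wim and the drive letters are placeholders to check against your own layout:

        rem inspect the WIM to find the right image index
        imagex /info E:\recovery\install.wim
        rem apply image index 1 onto the (formatted) target partition
        imagex /apply E:\recovery\install.wim 1 C:\
        rem write fresh boot files so the applied image will boot
        bcdboot C:\Windows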


  • Import LDIF file to external server

    - by colemanm
    As a follow-up to my previous question, which I've partly resolved, what we're trying to do now is take an exported .ldif file of the "Users" container on our OS X Server and import it into a separate OpenLDAP server on an EC2 instance. We'll use this for LDAP user authentication of other apps without having to open our internal network to LDAP traffic. The exported .ldif file thinks the DN of the "Users" container is cn=users,dc=server,dc=domain,dc=com. Is it easiest to configure the EC2 OpenLDAP server to think that its domain is the same, so the container is imported to the proper place? Or should we edit the text of the .ldif file to change the DN to match the external naming? Hopefully that makes sense... but I'm confused as to the best way to accomplish this.
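
    If rewriting the DNs turns out to be the simpler route, it is a mechanical substitution plus an import; a sketch, where the admin DN and the dc=example,dc=com suffix are placeholders for whatever the EC2 server is actually configured with:

        # rewrite the old suffix to the new server's suffix throughout the export
        sed 's/dc=server,dc=domain,dc=com/dc=example,dc=com/g' users.ldif > users-fixed.ldif
        # import into the OpenLDAP server, binding as its admin DN
        ldapadd -x -H ldap://localhost -D "cn=admin,dc=example,dc=com" -W -f users-fixed.ldif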


  • Vista Remote Desktop Mouse Drag Not Working

    - by Paul Lynch
    I've been using Remote Desktop to connect from a Windows Vista machine to a Windows XP machine. Everything used to work fine, but a few weeks ago I found that I could not drag things with my mouse. I can click on things just fine, but I cannot move or resize windows or select text with the mouse. I did some experimenting, and it seems that the remote machine behaves as though it gets a mouse up event shortly after it gets the mouse down event, even though I am still holding the button down. On both machines, things work fine outside of Remote Desktop. I did reinstall the OS and software on my Windows Vista machine a couple of months ago, and that might have been about the same time that this problem appeared. I don't frequently use Remote Desktop, so I can't be sure. Does anyone have any suggestions?


  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things.

    As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto is more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects are all examples of things that might profitably go into the stub proto.

    Without a stub proto, these items were handled in a variety of ad hoc ways. If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this:

    - Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one.
    - These interdependencies are not obvious to the make utility, and can lead to races.
    - They are not obvious to the human reader, who may therefore not realize that they exist, and break them.

    Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems:

    - It requires a long list of exceptions to silence our normal unused-proto-item error checking.
    - In the past, we have accidentally shipped files that we did not intend to deliver to the end user.
    - Mixing cruft with valuable items makes it hard to discern which is which.

    The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and the elimination of all existing special-case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight — it will be a long-term, case-by-case project, but the overall trend is clear.

    The Stub Proto, -z assert-deflib, And The End Of Accidental System Object Linking

    We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: how to ensure that we're linking to the OS bits we're building instead of to those from the running system.
    The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that make up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS.

    None of these consolidations is self contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations. The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist.

    The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas. In general, this works well and dependencies are found in the right places. However, there have always been failures:

    - Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server.
    - Errors in the makefiles can wipe out the -L options that our top level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server.

    These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error. The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so the second form of error was still possible.
    While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure, the one caused by makefile errors:

    - Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib).
    - Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch.
    - Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults.
    - Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately.

    All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated

        7021198 ld option to warn when link accesses a library via default path
        PSARC/2011/068 ld -z assert-deflib option

    into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor:

        -z assert-deflib=[libname]

        Enables warning messages for libraries specified with the -l command line option
        that are found by examining the default search paths provided by the link-editor.
        If a libname value is provided, the default library warning feature is enabled,
        and the specified library is added to a list of libraries for which no warnings
        will be issued. Multiple -z assert-deflib options can be specified in order to
        specify multiple libraries for which warnings should not be issued.

        The libname value should be the name of the library file, as found by the
        link-editor, without any path components. For example, the following enables
        default library warnings, and excludes the standard C library:

            ld ... -z assert-deflib=libc.so ...

        -z assert-deflib is a specialized option, primarily of interest in build
        environments where multiple objects with the same name exist and tight control
        over the library used is required. It is not intended for general use.

    Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain form of -z assert-deflib, and make symlinks for the non-OSnet dependencies. The exceptions to this are dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better. Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with

        7021896 Prevent OSnet from accidentally linking to build system

    which integrated into snv_162 (March 2011).
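
    To make the mechanics concrete, here is a sketch of what the symlink approach amounts to (the paths and the libX11 example are illustrative, not the actual OSnet makefile macros): a non-OSnet dependency gets a symlink in the stub proto so that -L finds it there, and the link step runs with the plain form of -z assert-deflib so that any lookup falling through to the default paths becomes a diagnostic:

        # populate the stub proto with a symlink to a non-OSnet library from the build system
        ln -s /usr/lib/libX11.so $STUBPROTO/usr/lib/libX11.so
        # link only against the workspace protos; any fallback to /lib or /usr/lib is now reported
        ld -o mycmd mycmd.o -L$STUBPROTO/usr/lib -L$PROTO/usr/lib \
            -z assert-deflib -z fatal-warnings -lX11 -lc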
    Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, all of which were fixed in the same putback. The errors we found underscore how difficult such mistakes can be to prevent without an automatic system in place to catch them.

    Conclusions

    The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.


  • rsync over SSH with cron in an OS X environment

    - by Martin
    I want to automatically download files and folders from a Linux server to which I have an SSH (and FTP) account. The files should be downloaded on a regular basis (I suppose cron is the right tool for the job) onto an OS X machine. I tried the following rsync command, which works fine: rsync -avzbe ssh [email protected]:/www/htdocs/something/somefolder /Users/me/folder/foo/ However, I have to enter the account's password every time (for the SSH account on the server machine). The server is a managed one, and I'm afraid I can't change the password. Here are my questions:

    - How do I bypass entering the password by storing it somewhere?
    - How do I then automate this correctly?
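
    The standard answer to the password question is SSH public-key authentication, which requires no change to the account password; a sketch (user@server stands in for the real account, and the rsync line mirrors the one above):

        # generate a key pair without a passphrase
        ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""
        # append the public key on the server (enter the password one last time)
        cat ~/.ssh/id_rsa.pub | ssh user@server 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
        # then schedule the job with "crontab -e", adding a line such as:
        # 0 3 * * * rsync -avzbe ssh user@server:/www/htdocs/something/somefolder /Users/me/folder/foo/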


  • WARNING Retrying Bulk Insert for file:sqlldr due to Communication Error:256

    - by user702295
    I am running my engine on Linux and am receiving an intermittent message "WARNING Retrying bulk insert for file: sqlldr due to communication Error: 256". The engine seems to have completed successfully, but it is not clear whether this error caused some of the forecast to not complete. It is also not clear what caused the error.

    Generally, if you see only the WARNING, it means that subsequent retries of the same load request eventually succeeded, and so the run as a whole is not affected. To learn more about what happened, look for .log/.bad files left in the engine's bin directory, or possibly a quote of them within the log of the specific engine that had the issue. The sqlnet.log file may also have some information about it, and on the database server side there may be a log or alert regarding what happened; look at the alert.log.

    In general, it could be that the database server or network was overloaded at the time and the connection was somehow rejected, failed, or aborted, either due to specific settings on concurrent connections/sessions or inadvertently due to a glitch in the network, OS, or hardware. If this repeats and becomes more frequent during the run, you should look into it further as mentioned above.

    You can also track this using either SQL*Trace or java.util.logging:

    - Globally enable logging by setting the oracle.jdbc.Trace system property: java -Doracle.jdbc.Trace=true
    - Client-side tracing: your SQLNET.ORA file should contain the following lines to produce a client-side trace file:

        trace_level_client = 10
        trace_unique_client = on
        trace_file_client = sqlnet.trc
        trace_directory_client = <path_to_trace_dir>

    - Server-side tracing: to enable server-side tracing, use the following parameters:

        trace_level_server = 10
        trace_file_server = server.trc
        trace_directory_server = <path_to_trace_dir>

    The following values can be used for the TRACE_LEVEL* parameters:

        16 or SUPPORT — WorldWide Customer Support trace information
        10 or ADMIN — Administration trace information
        4 or USER — User trace information
        0 or OFF — no tracing (the default)

    Additional information is readily available via the web.


  • Switching from Windows Virtual PC to VirtualBox without formatting

    - by djechelon
    I have a Win7 virtual machine running on Windows Virtual PC, in which I'm currently developing. I found that I dislike WVPC, so I installed VirtualBox, hoping for better performance. However, importing the existing VHD into a new VM does not seem to work: even though I see the Windows boot screen, the OS crashes with a BSOD and asks to run the restore tool. That tool finds no problem and reboots, but the BSOD is still there. I'd rather not set up a new VM from scratch if possible. Is it possible to make this switch?
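
    One frequent cause of exactly this kind of BSOD is a controller mismatch: Windows Virtual PC exposes the disk on an IDE controller, while new VirtualBox VMs often default to SATA. Attaching the VHD to an IDE controller is worth trying first; a sketch, where "Win7Dev" and the VHD path are placeholders:

        rem ensure the VM has an IDE controller, then attach the existing VHD to it
        VBoxManage storagectl "Win7Dev" --name "IDE" --add ide
        VBoxManage storageattach "Win7Dev" --storagectl "IDE" --port 0 --device 0 --type hdd --medium C:\VMs\machine.vhd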


  • Ubuntu 11.10 won't boot on Dell XPS 8300

    - by Phil Gorman
    I have a brand new Dell Studio XPS 8300 desktop with an i7-2600 CPU, H67 chipset, 8GB DDR3, two 1TB HDDs in mirrored RAID, and an AMD Radeon 6770. Dell doesn't support Ubuntu here in Australia, so it came with Windows 7 and Windows software. Yes, I had to pay for an OS and software I didn't want to get the hardware I did want, all at a greatly inflated price. It's not all beer and skittles in the land of Oz. I changed the boot priority in the BIOS to DVD and ran Ubuntu 11.10 64-bit from the ISO with NOMODESET. The installation reformatted all partitions to rid me of the dreaded Windows. All was well until reboot. The BIOS does its thing, then it's "The Black Screen of Death" with a blinking cursor; no boot screen, no GRUB, no keyboard, no mouse. I've searched the Dell and Ubuntu forums in vain. Can you help? I would be really grateful for any advice that can help turn my big expensive paperweight into a really useful machine. Thank you in anticipation, kind people. Phil
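
    A blinking cursor straight after the BIOS usually means no boot loader was written where the firmware looks for it. One commonly suggested recovery path is to reinstall GRUB from the live DVD; a sketch, with the caveat that on the motherboard's mirrored RAID the disk may appear as a /dev/mapper node rather than /dev/sda, so check the first command's output before substituting device names:

        # identify the disk and the installed root partition from the live session
        sudo fdisk -l
        # mount the root partition (replace sdXY) and reinstall GRUB to the disk (replace sdX)
        sudo mount /dev/sdXY /mnt
        sudo grub-install --boot-directory=/mnt/boot /dev/sdX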


  • Workaround: building FBX in XNA raises OutOfMemoryException

    - by Vitus
    If you try to add a large FBX 3D model to an XNA project and build it, you can get an OutOfMemoryException build error like the following:

        Error 1 Building content threw OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
        at System.Collections.Generic.List`1.set_Capacity(Int32 value)
        at System.Collections.Generic.List`1.EnsureCapacity(Int32 min)
        at System.Collections.Generic.List`1.InsertRange(Int32 index, IEnumerable`1 collection)
        at Microsoft.Xna.Framework.Content.Pipeline.Graphics.VertexChannel`1.InsertRange(Int32 index, Int32 count)
        at Microsoft.Xna.Framework.Content.Pipeline.Graphics.VertexContent.InsertRange(Int32 index, IEnumerable`1 positionIndexCollection)
        at Microsoft.Xna.Framework.Content.Pipeline.Graphics.MeshBuilder.AddTriangleVertex(Int32 indexIntoVertexCollection)
        at Microsoft.Xna.Framework.Content.Pipeline.MeshConverter.FillNodeWithInfoFromMesh(KFbxNode* fbxNode, String name, KFbxGeometryConverter* geometryConverter)
        at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessInformationInNode(KFbxNode* fbxNode, String name, Boolean* partOfMainSkeleton, Boolean* warnIfBoneButNotChild)
        at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessNode(ValueType parentAbsoluteTransform, NodeContent potentialParent, KFbxNode* fbxNode, Boolean partOfMainSkeleton, Boolean warnIfBoneButNotChild)
        at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessNode(ValueType parentAbsoluteTransform, NodeContent potentialParent, KFbxNode* fbxNode, Boolean partOfMainSkeleton, Boolean warnIfBoneButNotChild)
        at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.Import(String filename, ContentImporterContext context)
        at Microsoft.Xna.Framework.Content.Pipeline.ContentImporter`1.Microsoft.Xna.Framework.Content.Pipeline.IContentImporter.Import(String filename, ContentImporterContext context)
        //additional calls here …

    My desktop PC has 8GB of RAM, but Visual Studio's process devenv.exe uses under 2GB of it during the build (about 3.5-4GB of RAM is always free). It's obvious that VS can't address more than 2GB of RAM, and when that limit is reached, the build fails. My OS is Win x64, so I "charge" devenv.exe by using the editbin.exe utility; in the VS Command Prompt I run the following:

        editbin "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe" /LARGEADDRESSAWARE

    This command edits the image to indicate that the application can handle addresses larger than 2 gigabytes. After that the FBX file builds successfully! Of course, you must put the proper path to devenv.exe, depending on your installation path. If you are on Win x86, you need to take an additional step; more info here.

    P.S.: although you can now build bigger files than usual, keep in mind that XNA has some restrictions on vertex buffer size etc., depending on your current XNA project profile (Reach or HiDef). If your model's vertex buffer is more than 64MB (with the Reach profile), the model can't be built and raises an error.


  • Safari 5 fails to install on my Mac

    - by Amairani409
    I just got the new Safari 5.0; I downloaded it because my Mac told me there was a new version I should get. But when I try to run the new version of the application, nothing happens! I mean, the program seems to be running, but nothing appears on the screen, and when I try to see my Top Sites a little window shows up but just doesn't show anything. Then 3 seconds later the program shuts down! I don't know why this is happening; ideas?

    Mac OS X version 10.5.8
    2.66GHz Intel Core 2 Duo
    4GB 1067 MHz DDR3


  • MacBook (late 2008) EFI Firmware Update 1.8 Problem

    - by user20832
    Last night I applied EFI Firmware Update 1.8 to my MacBook (late 2008). After installing the EFI Firmware Update, I rebooted my computer and installed SuperDrive Firmware Update 3.0. After that my MacBook rebooted, and a black screen is showing with "No bootable device. Please insert bootable device and press any key to continue". I put in one of the bootable installation discs (Mac OS) and it's still stuck on the same screen. I have also tried restarting my MacBook manually, but no luck. Has anyone experienced this?


  • How to install Win7 over top of WinXP partition?

    - by Zeno
    I have a 2TB hard drive with two partitions on it: a C drive for WinXP and another for extra space. I have a Win7 Pro install DVD, and I have formatted that C drive from the DVD; it is now a blank "Primary" partition. I attempted to go through the Win7 setup and install it on that partition, but it gives me an error: "Setup was unable to create a new system partition or locate an existing system partition. See the setup log files for more information." Googling leads me to believe the entire drive has to be "cleaned" (diskpart), but that would wipe the other, non-OS partition, and I need to keep that data. How can I install Win7 on this blank partition without losing the data on the other partition?
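
    Before resorting to a full clean, one workaround often reported for this exact error is to mark the target partition active from within setup (press Shift+F10 at the installer for a command prompt). A sketch; double-check the disk and partition numbers first, since selecting the wrong one endangers the data partition:

        diskpart
        list disk
        select disk 0
        list partition
        rem select the blank partition intended for Windows 7
        select partition 1
        active
        exit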


  • Oracle Forms Migration Forum - 1/Mar/11 - Lisboa

    - by Claudia Costa
    Modernize your Forms investment
    Oracle Forms is a long-standing Oracle technology for designing and developing business applications quickly and efficiently. Complementing it, Oracle Reports provides fast access to all the information relevant to the business. As members of the Oracle Fusion Middleware product family, Oracle Forms and Oracle Reports deliver greater agility, better decision-making, risk mitigation, and cost reduction across diverse IT environments.
    Preserve your Forms investment
    Oracle's continued commitment to Forms technology allows you to update and reintegrate your existing investment. Not only can your applications be deployed to the Web, they can also become part of a Service-Oriented Architecture (SOA) built from Web Services.
    In this half-day session you will learn about all the Forms migration options, so you can make an informed decision. We will cover the following topics:

    - Modernize your investment. Take advantage of the latest technologies.
    - Choose the right path for your business. Evolve in the right direction.
    - Preserve existing assets. Upgrade without losing what you have already invested.

    Elevate and modernize the Forms legacy in your organization! Click here to register for this FREE event. For further information, send us an email. AGENDA Register Now!


  • How can I send an email from Mail.app to Outlook with an attachment that does not embed into the email body?

    - by JAG2007
    I'm using Mail.app (on Mac OS X 10.6), and when I send an email with an attached image to users of Outlook on a PC, they get the email with the image embedded in the body, not as an attachment. I even tried clicking "view as icon" before sending the attachment from Mail.app, but that made no difference. I also tested this myself, sending from Mail.app to my PC's Outlook, and I see the same problem: in Outlook the image comes through not as an attachment but embedded in the body of the email. This is an issue primarily because the recipient is then unable to click "save as" and has to copy and paste the image into some other program, which means the file is converted from JPG or PNG to BMP format. But beyond that, most of my recipients don't even know how to copy and paste it into another program to save it that way. They need the "save attachment as" functionality.


  • Microsoft Office Compatibility Pack "The converter failed to open the file" error & "This is a pre-release version"

    - by HaydnWVN
    What issues have people encountered with older OSes (2000, XP) and older versions of Microsoft Office (2000, XP, 2003) using the 'Microsoft Office Compatibility Pack'? I have a couple of Windows 2000 client PCs encountering different errors when attempting to open .docx or .xlsx documents, some with Office XP and the others with Office 2003. Reading through forums, it appears that not all versions of the Compatibility Pack were compatible with Windows 2000 (versions 3 and 4 are not). There are also Service Packs for the Compatibility Pack. With these Windows 2000 clients, it seems I need version 2 of the Compatibility Pack and then the Service Packs on top of it, yet I'm unable to find a download link for version 2. The first error message, "This is a pre-release version of the Compatibility Pack and can open pre-release Office 2007 files only," is solved below. The second error message, "The converter failed to open the file," I am still troubleshooting.

