Search Results

Search found 40479 results on 1620 pages for 'binary files'.


  • No GUI boot; startx error, I suspect no filesystem corruption.

    - by Dharmaj Soni
    Until yesterday, my Ubuntu 9.10 was working fine. I had watched a movie in VLC and charged my iPod from the laptop. Today, when I started the machine, it booted straight into the command line. There seems to be no filesystem corruption, since I can view and open text files. Before the CLI appeared, the screen blinked with a cursor, the white Ubuntu logo flashed, and then I got the CLI login prompt. After logging in, if I run startx to start GNOME, I get the following error after a few seconds:

        giving up
        xinit: No such file or directory (errno 2): unable to connect to X server
        xinit: No such process (errno 3): Server error

    The same error comes up if I use sudo, if I change my directory to '/' before running startx, and if I boot into the CLI via the recovery mode option in GRUB and then try startx. Running the xinit command by itself gives "Server error". Also, when trying GDM, I get two errors. I cannot connect to the internet in this state. I am using a Dell Inspiron 1440 with no discrete graphics card. Thanks for any help.
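    A first diagnostic step (a sketch, not a confirmed fix; the log path assumes a standard Xorg setup, and the reinstall needs a working wired connection):

        # Inspect the end of the X server log for the real failure reason
        tail -n 50 /var/log/Xorg.0.log

        # If the log shows missing modules or a broken install, reinstalling
        # the X server and regenerating its config may help (package names
        # are the usual ones for Ubuntu 9.10 and may differ elsewhere)
        sudo apt-get install --reinstall xserver-xorg
        sudo dpkg-reconfigure xserver-xorg

    The errno 2 message suggests xinit cannot find or start the X server binary at all, so the Xorg log (or its absence) is the quickest way to tell a broken package from a broken configuration.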

    Read the article

  • Solutions for Project management [closed]

    - by user14416
    The team consists of three people. The development method is Scrum, the language is C++, and the project is under git control. The start-up budget is zero. The following things have to be chosen:

    - Build and version numbering
    - Project documentation (a file with the most important information about the current stage of the project, updated every time a new version or subversion emerges)
    - A project management tool (like Trac or Redmine; I cannot use those because there is no hosting)
    - Code documentation (I am considering Doxygen)

    The following questions have arisen:

    1. What can you add to the above list of the main project management decisions for the described project?
    2. One of the three participants runs Linux (no MS Office), one has Windows with MS Office (and does not want to use LibreOffice or OpenOffice), and one has Windows without MS Office. What formats and tools can you suggest for the project documentation? An online wiki does not fit; it must be files. OneNote might be a good project management tool, but for the reason above it is not possible. What can you advise?
    3. Suggest a system for build and version numbering (see the sketch below).
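    For the build and version numbering question, one common zero-budget approach (a sketch, assuming annotated tags are used for releases) is to let git itself generate build identifiers from tags:

        # Tag releases using a MAJOR.MINOR.PATCH scheme
        git tag -a v0.1.0 -m "first internal milestone"

        # Derive a unique, sortable build identifier for any commit:
        # <last tag>-<commits since tag>-g<abbreviated hash>
        git describe --tags
        # e.g. v0.1.0-14-g2414721

    The output of git describe can be embedded into the binary at build time, so every build is traceable back to an exact commit without any extra infrastructure.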

    Read the article

  • Why is Samba Access from Windows So Slow?

    - by swalker2001
    I have set up a file server using Ubuntu 12.04 Server. The purpose is to serve several network drives to Windows users, drives that have heretofore been served by numerous NAS devices. I have Samba set up with one share defined so far, and I can connect to it fine from my test Windows 7 and Windows XP machines. However, when I do a directory listing on the share from Windows, it can take up to two minutes to list all the files; it would have taken about 1.5 seconds on the Buffalo NAS. Sometimes it times out with no response at all. I have used the default smb.conf and simply added the following for the share I have set up so far:

        [engineering]
        comment = Ubuntu File Server Share
        path = /networkdriveshares/engineering
        browsable = yes
        guest ok = yes
        read only = no
        create mask = 0755

    I tried changing the workgroup setting to the Active Directory domain name our Windows computers use, but didn't notice any difference. The only other change I made to the default smb.conf was adding the recommended socket settings:

        socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192

    There is lots of information about slow Samba shares online, but I have tried all of the solutions I found and none made a lick of difference. If there is no solution, is there a better way to set up a file server to be used by Windows clients?
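    One way to narrow the problem down (a sketch; the share name is this post's own) is to time the same directory listing from the server itself with smbclient, which takes Windows and the network path out of the loop:

        # List the share locally through the Samba stack and time it
        time smbclient //localhost/engineering -N -c 'ls'

    If the local listing is fast, the delay is likely in name resolution or the client-to-server network path rather than in Samba itself; if it is equally slow, the server configuration or the underlying filesystem is the place to look.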

    Read the article

  • Cron: job starts but doesn't complete

    - by Guandalino
    I have a problem with a cron job that starts but doesn't complete; running the command manually works fine. I have already read the page about cron issues and solutions here on Ask Ubuntu and tried the proposed solutions, but didn't find one that works in my case. I'm using Ubuntu 12.04.

        $ crontab -e
        SHELL=/bin/bash   # otherwise it would be /bin/sh
        59 16 * * * /bin/duply calendar backup > /tmp/duply.log

    By the way, the cron file ends with an empty line, as someone pointed out it should. Once the job has "finished":

        $ cat /tmp/duply.log
        Start duply v1.5.7, time is 2012-06-22 16:59:01.

    Running the script manually instead works correctly and gives this output:

        Start duply v1.5.7, time is 2012-06-22 17:06:39.
        [cut]
        ... here is a long output generated by duply,
        ... and yes, files have been backed up.
        [cut]
        --- Finished state OK at 17:06:42.581 - Runtime 00:00:03.170 ---

    I also tried restarting the cron daemon (sudo service cron restart), but nothing changed. Do you have any suggestions for fixing this?
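    A useful first step (a sketch, assuming duply's own error output is what is being lost) is to capture stderr as well, since the crontab above only redirects stdout, and to give cron the fuller PATH an interactive shell has; duply shells out to other tools that may not be on cron's minimal PATH:

        SHELL=/bin/bash
        PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
        59 16 * * * /bin/duply calendar backup > /tmp/duply.log 2>&1

    If the job then fails with a message in /tmp/duply.log, the missing piece (an unset environment variable, a GPG agent, an absent binary) is usually named there.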

    Read the article

  • PHP file_put_contents File Locking

    - by hozza
    The scenario: you have a file with a string (an average sentence's worth) on each line. For argument's sake let's say this file is 1 MB in size (thousands of lines). You have a script that reads the file, changes some of the strings within the document (not just appending but also removing and modifying some lines) and then overwrites all the data with the new data.

    The questions: does 'the server' (PHP, the OS, httpd, etc.) already have systems in place to stop issues like this (reading/writing halfway through a write)?

    1. If it does, please explain how it works and give examples or links to relevant documentation.
    2. If not, are there things I can enable or set up, such as locking a file until a write is completed and making all other reads and/or writes fail until the previous script has finished writing?

    My assumptions and other information: the server in question is running PHP with Apache or lighttpd. If the script is called by one user and is halfway through writing to the file, and another user reads the file at that exact moment, the reading user will not get the full document, as it hasn't been fully written yet. (If this assumption is wrong, please correct me.) I'm only concerned with PHP writing and reading a text file, and in particular the functions fopen/fwrite and mainly file_put_contents. I have looked at the file_put_contents documentation but have not found a detailed or clear explanation of what the LOCK_EX flag is or does. The scenario is an EXAMPLE of a worst case where I would assume these issues are more likely to occur, due to the large size of the file and the way the data is edited. I want to learn more about these issues, so please no answers or comments like "use MySQL" or "why are you doing that": I just want to learn about file reading/writing with PHP, and I don't seem to be looking in the right places/documentation. And yes, I understand PHP is not the perfect language for working with files in this way.
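    On LOCK_EX specifically: it makes file_put_contents take an advisory exclusive lock (the same lock flock() exposes) before writing. Advisory means it only protects against readers and writers that also ask for the lock, so a cooperating reader has to take a shared lock itself. A minimal sketch of both sides:

        <?php
        // Writer: block until an exclusive lock is available, then rewrite the file.
        file_put_contents('data.txt', $newContents, LOCK_EX);

        // Reader: take a shared lock so we never observe a half-written file.
        $fp = fopen('data.txt', 'r');
        if (flock($fp, LOCK_SH)) {
            $contents = stream_get_contents($fp);
            flock($fp, LOCK_UN);   // release the shared lock
        }
        fclose($fp);

    Note this does not protect against a plain file_get_contents() call that skips locking, and the lock is released as soon as file_put_contents returns.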

    Read the article

  • unable to install postgres for ubuntu

    - by ramya
    I am trying to install PostgreSQL on Ubuntu using apt-get install postgresql:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following NEW packages will be installed:
          postgresql
        0 upgraded, 1 newly installed, 0 to remove and 48 not upgraded.
        Need to get 0B/23.2kB of archives.
        After this operation, 57.3kB of additional disk space will be used.
        debconf: delaying package configuration, since apt-utils is not installed
        Selecting previously deselected package postgresql.
        (Reading database ... 42866 files and directories currently installed.)
        Unpacking postgresql (from .../postgresql_8.4.9-0ubuntu0.10.04_all.deb) ...
        Setting up postgresql (8.4.9-0ubuntu0.10.04) ...

    But PostgreSQL is not installed properly. I tried purging and reinstalling it, but I am not able to find a solution. Please help me with this. Thanks, Ramya.
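    On Ubuntu the postgresql package is only a small metapackage; the server itself lives in a versioned package, and its data directory is set up as a "cluster". A way to check what actually got installed (a sketch; the version matches the 8.4 package shown above):

        # Show the state of the versioned server packages
        dpkg -l 'postgresql*'

        # List existing database clusters and whether they are running
        pg_lsclusters

        # If no cluster exists, create and start one for 8.4
        sudo pg_createcluster 8.4 main --start

    pg_lsclusters and pg_createcluster come from the postgresql-common package that the metapackage pulls in, so a "successful" install with no running cluster usually shows up clearly here.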

    Read the article

  • Friday Tips #5

    - by Chris Kawalek
    Happy Friday, everyone! Following up on yesterday's post about Oracle VM VirtualBox being selected as the best virtualization solution for 2012 by the readers of Linux Journal, our Friday tip is about that very cool piece of software.

    Question: How do I move a VM from one machine to another with Oracle VM VirtualBox?

    Answer by Andy Hall, Product Management Director, Oracle Desktop Virtualization: There are a number of ways to do this, with pros and cons for each. The most reliable approach is to export and import virtual machines: from the VirtualBox Manager, simply use the File > Export Appliance menu and follow the wizard's lead; move the resulting file(s) to the destination machine; and import the VM into VirtualBox. This method takes longer and uses more disk space than other methods, because the configuration files and virtual hard drives are converted into an industry-standard format (.ova or .ovf). But an advantage of this approach is that the creator of the virtual appliance can add a license which the importer will see and click to accept at import time. This is especially useful for ISVs looking to deliver pre-built, configured and tested appliances to their customers and prospects.

    Thanks Andy! Remember, if you have a question for us, use the Twitter hashtag #AskOracleVirtualization. We'll see you next week!

    -Chris
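    The same move can be scripted from the command line (a sketch; the VM and file names are placeholders):

        # On the source machine: export the VM to an OVA bundle
        VBoxManage export "MyVM" -o MyVM.ova

        # Copy MyVM.ova to the destination machine, then import it
        VBoxManage import MyVM.ova

    VBoxManage export/import are the CLI counterparts of the wizard described above and produce the same .ova/.ovf format, which is handy when the move has to be automated.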

    Read the article

  • How do I start Ubuntu without X server?

    - by Kaare Mikkelsen
    So, I'm trying to install the official NVIDIA drivers for my fancy graphics card, and they advise disabling the X server before installing, as well as making sure that I can boot without the X server, so as not to wreck anything. However, I seem to be doing something wrong.

    As I understand it, this should be as simple as changing the runlevel from 2 to 1? (I am aware that all this may simply be me not understanding runlevels.) If that is correct, a quick test should be simply typing sudo init 1 or sudo telinit 1 in a terminal. Doing that makes the system attempt to shut down, only it stops at the purple screen with the Ubuntu logo and five white dots underneath. I haven't observed it get anywhere from there; I always end up holding down the power button. sudo telinit 3 has no visible effect.

    Alternatively, I should be able to get there using recovery mode, activated through the GRUB menu? I have very little success with that. After picking recovery mode, I am faced with a set of options about how to proceed. Choosing either "network enabled" or "text only", I get a dialog explaining that this will mount my / filesystem in read/write mode and asking whether this is what I want. I choose yes, and it seems to report that my drive is fine (there's a single line of text detailing the state of the partition). And then it stops. I haven't tried letting it sit for more than a few minutes, but presumably this process should be comparable in duration to a regular boot?

    I am not particularly fond of messing with any .conf files until I am certain that I can handle things with the training wheels on. So I guess there are two questions: the one in the title, and "how do I start a text-only session without changing defaults?" Thanks in advance :)
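    On Ubuntu releases of this era, runlevels 2 through 5 are identical and X is started by the display manager rather than by a runlevel, which is why telinit 3 changes nothing. A sketch of the usual approach (assuming LightDM; substitute gdm or kdm if that is what the system runs):

        # Switch to a text console with Ctrl+Alt+F1 and log in, then:
        sudo service lightdm stop

        # ... install the NVIDIA driver from the console ...

        # Bring the graphical session back afterwards
        sudo service lightdm start

    Stopping the display manager kills the X server without changing any boot defaults, which matches what the driver installer asks for.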

    Read the article

  • Android – Finding your SDK debug certificate MD5 fingerprint using Keytool

    - by Bill Osuch
    I recently upgraded to a new development machine, which means the certificate used to sign my applications during debugging changed. Under most circumstances you'll never notice a difference, but if you're developing apps that use Google's Maps API you'll find that your old API key no longer works with the new certificate fingerprint.

    Google's instructions walk you through retrieving the MD5 fingerprint of your SDK debug certificate (the certificate you're probably signing your apps with before publishing), but they don't say much about the keytool command itself. The thing to remember is that keytool is part of Java, not the Android SDK, so you'll never find it by searching through your Android and Eclipse directories. Mine is located in C:\Program Files\Java\jdk1.7.0_02\bin, so you should find yours somewhere similar. From a command prompt, navigate to this directory and type:

        keytool -v -list -keystore "C:/Documents and Settings/<user name>/.android/debug.keystore"

    That's assuming the path to your debug certificate is the typical one. If this doesn't work, you can find out where it's located in Eclipse by clicking Window -> Preferences -> Android -> Build. There's no need to use the additional commands shown on Google's page. You'll be prompted for a password; just hit Enter. The last line shown, "Certificate fingerprint", is the key you'll give Google to generate your new Maps API key.
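    A non-interactive variant (a sketch relying on the well-known defaults for the SDK debug keystore: password "android", alias "androiddebugkey"):

        keytool -v -list -alias androiddebugkey -storepass android -keystore "%USERPROFILE%\.android\debug.keystore"

    On Windows Vista and later the debug keystore lives under %USERPROFILE%\.android rather than Documents and Settings, which is another common reason the documented command comes up empty.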

    Read the article

  • Leveraging .Net 4.0 Framework Tools For Encrypting Web Configuration Sections

    - by Sam Abraham
    I would like to share a few points about encrypting web configuration sections in .NET 4.0. This information is also applicable to .NET 3.5 and 2.0. Two methods can work for encrypting connection strings in a web project's configuration file:

    1. Do it all yourself. In this approach you implement helper functions for encrypting and decrypting configuration file content, and the program explicitly retrieves the appropriate content from the configuration file and decrypts it before use. The disadvantages of this implementation are the overhead of maintaining the encryption/decryption code, as well as the burden of always ensuring sections are decrypted before use and re-encrypted whenever edited.

    2. Leverage the .NET 4.0 Framework (the way to go!). Fortunately, all the tools needed to protect configuration files are built into .NET 2.0/3.5/4.0, with very little setup. To encrypt connection strings, use the ASP.NET IIS Registration Tool (aspnet_regiis.exe). Note that a 64-bit version of the tool also exists under the Framework64 folder on 64-bit systems. The command to encrypt the web.config connection strings section is simply:

        aspnet_regiis -pe "connectionStrings" -app "/SampleApplication" -prov "RsaProtectedConfigurationProvider"

    To later decrypt this configuration section:

        aspnet_regiis -pd "connectionStrings" -app "/SampleApplication"

    A brief description of the command-line options used in the example above (aspnet_regiis supports many more options, which you can read about in the links below):

        Option   Description
        -pe      Section name to encrypt
        -pd      Section name to decrypt
        -app     Web application name
        -prov    Encryption/decryption provider

    ASP.NET automatically decrypts the content of the web.config file at runtime, so no programming changes are needed. Another tool, aspnet_setreg.exe, is to be used if certain configuration file sections pertinent to the .NET runtime are to be encrypted. For more information on when and how to use aspnet_setreg, please refer to the references below.

    Hope this helps! Some great references on the topic:

        http://msdn.microsoft.com/en-us/library/ff650037.aspx
        http://msdn.microsoft.com/en-us/library/zhhddkxy.aspx
        http://msdn.microsoft.com/en-us/library/dtkwfdky.aspx
        http://msdn.microsoft.com/en-us/library/68ze1hb2.aspx
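    For reference, this is roughly what the transformation looks like on disk (a sketch; the cipher payload and key information are elided):

        <!-- before -->
        <connectionStrings>
          <add name="Main" connectionString="Data Source=.;Initial Catalog=Shop;..." />
        </connectionStrings>

        <!-- after aspnet_regiis -pe -->
        <connectionStrings configProtectionProvider="RsaProtectedConfigurationProvider">
          <EncryptedData Type="http://www.w3.org/2001/04/xmlenc#Element"
                         xmlns="http://www.w3.org/2001/04/xmlenc#">
            <CipherData>
              <CipherValue>...base64...</CipherValue>
            </CipherData>
          </EncryptedData>
        </connectionStrings>

    Code that reads ConfigurationManager.ConnectionStrings keeps working unchanged, because the decryption happens inside the configuration system.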

    Read the article

  • HTG Explains: What Is Bitcoin, the Virtual Digital Currency?

    - by YatriTrivedi
    Bitcoin is a virtual currency that employs some very interesting principles. Here's the skinny on what exactly it is and how the fascinating technology behind it works.

    Disclaimer: This is NOT financial or legal advice. This. Is. NOT. Financial. Or. Legal. Advice. This is not, in any way, shape, or form, financial or legal advice. We're covering this topic because of the technological implementations it uses and the innovations it attempts to make. If you do anything because of this post, we are not responsible, because this is NOT financial or legal advice. ^_^

    Read the article

  • Cross-Platform Migration using Heterogeneous Data Guard

    - by Roy F. Swonger
    Most people think of Data Guard as a disaster recovery solution, and it certainly excels in that role. However, did you know that you can also use Data Guard for platform migration under some conditions? While you would normally have your primary and standby Data Guard systems running on the same OS and hardware platform, some heterogeneous combinations of primary and standby systems are supported by Data Guard physical standby. One example of heterogeneous Data Guard support is the ability to go between Linux and Windows on many processor architectures. Another is support for environments running HP-UX on both PA-RISC and Itanium hardware. Brand new in 11.2.0.2 is the ability to have both SPARC Solaris and IBM AIX on Power Systems in the same Data Guard environment. See My Oracle Support note 413484.1 for all the details about supported platform combinations.

    So, why mention this in an upgrade blog? Simple: much of the time required for a platform migration is usually spent copying files from one system to another. If you are moving between systems that are supported by heterogeneous Data Guard, you can reduce that migration downtime to a matter of minutes. This can be a big win when downtime is at a premium (and isn't downtime always at a premium?). In addition, you get the benefit of being able to keep the old and new environments synchronized until you are sure the migration is successful!

    A great case study of using Data Guard for a technology refresh is located on this OTN page. The case study showing CERN's methodology isn't highlighted as a link on the overview page, but it is clickable. As always, make sure you are fully versed in the details and restrictions by reading the available documentation and MOS notes. Happy migrating!
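    Before attempting a heterogeneous configuration, it is worth confirming how Oracle identifies each platform (a sketch using standard dynamic views):

        -- Run on both the primary and the standby candidate
        SELECT platform_id, platform_name FROM v$database;

        -- List every platform Oracle knows about, with endianness
        SELECT platform_id, platform_name, endian_format
          FROM v$transportable_platform
         ORDER BY platform_id;

    The platform IDs from both systems can then be checked against the supported combinations in MOS note 413484.1.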

    Read the article

  • Alienware M17x R3: Possible downclock

    - by Ywen
    I recently installed Kubuntu 11.10 32-bit (I had graphics driver issues and wanted to try the 32-bit version) on my new Alienware M17x with a Core i7-2670QM CPU. The cores are supposed to be clocked at 2.2 GHz; however, the output of

        $ cat /proc/cpuinfo | grep -i "hz"

    gives me:

        model name : Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz
        cpu MHz    : 800.000

    with the same pair of lines repeated for all eight logical cores. If useful: the AC adapter is plugged in (the output is the same when the computer is powered only by the battery) and I have Firefox and Eclipse running. Does /proc/cpuinfo reflect an automatic downclock made to save power when processor load is low, or is this output abnormal?

    EDIT: OK, I checked, and yes, the output does vary with load; I reach 2.2 GHz when needed. But my underlying problem remains. I was checking my CPU clock because I experienced poor performance playing 720p video files with VLC or mplayer on battery (and I believe VLC by default uses only the CPU, not the GPU, to decode), whereas I have no such problems with VLC on Windows, which makes me think it isn't caused by a BIOS option; besides, every CPU-related option in the BIOS is turned ON.
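    Since the clock does scale with load, the remaining question is whether the governor ramps up quickly enough on battery. A sketch for inspecting and testing this (the sysfs paths are the standard cpufreq interface; cpufreq-set comes from the cpufrequtils package):

        # Current and available governors for core 0
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors

        # Temporarily pin every logical core to full speed and retry playback
        for c in $(seq 0 7); do sudo cpufreq-set -c $c -g performance; done

    If 720p playback is smooth under the performance governor, the problem is power-management policy on battery rather than a hardware fault.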

    Read the article

  • JMX Monitoring of GlassFish Servers

    - by tjquinn
    Did you ever wonder what this message in your GlassFish server.log file means?

        JMXStartupService has started JMXConnector on JMXService URL service:jmx:rmi://192.168.2.102:8686/jndi/rmi://192.168.2.102:8686/jmxrmi

    It means you can monitor any GlassFish server process, remotely or locally, using any standard Java Management Extensions (JMX) client, for example jconsole or jvisualvm. Copy the part of the log message that starts with "service:" into the Add JMX Connection dialog of jvisualvm, or into the New Connection dialog of jconsole. (The full string is truncated in the on-screen display, but if you copy it from server.log and paste it into the form, it should all be there.)

    The examples above are for a DAS, and your host will probably be different. The server.log files for other GlassFish servers (instances) will have similar log entries giving the JMX connection string to use for those processes; look for the host and/or port to be different.

    Note a few things about security:

    - Here we've assumed you are using the default admin username and password. If you are not, just enter a valid admin username and password for your installation.
    - Once connected, you have normal access to all the JVM statistics and controls.
    - You can use JMX clients that support MBeans to view the GlassFish configuration. When you connect to the DAS, you can also change that configuration, but you can only view the configuration when you connect to an instance.
    - To use a JMX client on one system to connect to a GlassFish server running on another system, you need to enable secure admin if you have not already done so:

        asadmin change-admin-password    (respond to the prompts)
        asadmin enable-secure-admin
        asadmin restart-domain           (as prompted in the output from enable-secure-admin)
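    The connection can also be opened straight from the command line (a sketch; substitute the URL from your own server.log):

        jconsole service:jmx:rmi://192.168.2.102:8686/jndi/rmi://192.168.2.102:8686/jmxrmi

    jconsole accepts a JMX service URL as its argument, which saves pasting it into the New Connection dialog.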

    Read the article

  • Semantic Versioning and splitting apart a library, providing a bundled build

    - by Derick Bailey
    I've got a nice, fairly popular JavaScript library that follows semantic versioning. The current library has a few dependency libraries, which are available either as separate downloads or as part of a single bundled download. I see a need to head down this path further: I want to extract additional, smaller libraries out of the one larger library. Each of these extracted libraries would be available as a separate file, or inside the one bundled build, again.

    If I go down this path of extracting the libraries, and providing a bundled version of the final code, does this require a major version change in semantic versioning? Would I have to bump from 1.x to 2.x? My first thought is no: I will not change any public API, so I don't have to change the major version number. But then I wonder... well, I am restructuring a lot of things, even though the final API of the bundled version would be the same. Is there a clear answer from semver on something like this? Do I need to bump the first, second or third dot? Or something else?

    Read the article

  • Game-a-Week One

    - by Matt Christian
    Anyone who chats with me on a semi-regular basis knows I am absolutely horrible at completing something from beginning to end. Often I'll begin something, lose interest at some point, and end up moving on to the next thing. For example, I have half of a full game created, a third of a novel written, and half of a model set created. Needless to say, unless I have some sort of pressure to finish something, I don't stick to it.

    Recently, however, one of my online buddies challenged me to create a simple game. The start date was last Thursday and the final game needs to be delivered by next Sunday, giving me just over a week. However, I am going out of town this Friday, so I will need to deliver it by Thursday, giving me exactly one week to develop a game. Here is what the game needs to include:

    - The player should be able to shoot
    - Shooting things should score points

    Sounds very simple, but given a single week to produce all the art assets plus the game, it isn't an easy task. So far I've developed:

    - An animated main menu that loads via script files and lets the user start a new game or exit
    - A 3D game world the player can move around in, with an over-the-shoulder camera
    - HUD elements that display the player's current score
    - A pause menu, shown when the player presses Esc, from which the game can be resumed (Esc again) or quit (Space)

    There are also two items implemented that don't work perfectly yet:

    - The JigLibX physics library integration
    - An arrow symbol on the main menu that rotates to always point at your mouse

    I've got two days of development left, so hopefully I can get collision working, clean up some of the art, and get more of the camera functionality working. Also, I'll need to take some time to package the game up, which hopefully shouldn't take too long.

    Read the article

  • Anonymous exposes sensitive bank emails

    - by martin.abrahams
    As expected for quite a while, emails purporting to reveal alleged naughtiness at a major bank have been released today. A bank spokesman says, "We are confident that his extravagant assertions are untrue." The BBC report concludes:

        "Firms are increasingly concerned about the prospect of disgruntled staff taking caches of sensitive e-mails with them when they leave," said Rami Habal, of security firm Proofpoint. "You can't do anything about people copying the content," he said. But firms can put measures in place, such as revoking encryption keys, which means stolen e-mails become unreadable, he added.

    Actually, there is something you can do to guard against copying. While traditional encryption lets authorised recipients make unprotected copies long before you revoke the keys, Oracle IRM provides encryption AND guards against unprotected copies being made. Recipients can be authorised to save protected copies, and to cut and paste within the scope of a protected workflow or email thread, but can be prevented from saving unprotected copies or pasting to unprotected files and emails. The IRM audit trail would also help track down attempts to open the protected emails and documents by unauthorised individuals within or beyond your perimeter.

    Read the article

  • Setting XSL-FO XML Schema in Visual Studio

    - by Lukasz Kurylo
    I've been playing lately with XSL-FO for generating PDF documents. XSL-FO has a long list of available tags and attributes, and for a newcomer who wants to create a simple document, finding the proper one is a nightmare. Fortunately we can set a schema for XSL-FO, which gives us full IntelliSense in Visual Studio. For a simple *.fo file, we can set the path to the schema directly in the file:

        <?xml version="1.0" encoding="utf-8"?>
        <fo:root
              xmlns:fo="http://www.w3.org/1999/XSL/Format"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://www.w3.org/1999/XSL/Format
                                  http://www.xmlblueprint.com/documents/fop.xsd">

    We can of course use the built-in VS XML schema selector instead. To use it, we must copy the schema file to the Schemas catalog (the default path for VS2012 is C:\Program Files (x86)\Microsoft Visual Studio 11.0\Xml\Schemas). Then we can go to Properties of the opened XML/XSLT file and set the newly added schema for the file. From then on, we should have IntelliSense enabled.
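    For a first test of the schema-assisted editing, a minimal XSL-FO document looks roughly like this (a sketch; the page dimensions are arbitrary):

        <?xml version="1.0" encoding="utf-8"?>
        <fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format">
          <fo:layout-master-set>
            <fo:simple-page-master master-name="A4" page-height="29.7cm"
                                   page-width="21cm" margin="2cm">
              <fo:region-body/>
            </fo:simple-page-master>
          </fo:layout-master-set>
          <fo:page-sequence master-reference="A4">
            <fo:flow flow-name="xsl-region-body">
              <fo:block font-size="12pt">Hello, XSL-FO!</fo:block>
            </fo:flow>
          </fo:page-sequence>
        </fo:root>

    Run through a formatter such as Apache FOP, this produces a one-page PDF, and with the schema registered, every element and attribute above is offered by IntelliSense.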

    Read the article

  • Unable to mount external hard drive - Damaged file system and MFT

    - by Khalifa Abbas Lame
    I get the following error when I try to mount my external hard drive:

        Error mounting /dev/sdc1 at /media/khalibloo/Khalibloo2:
        Command-line `mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177" "/dev/sdc1" "/media/khalibloo/Khalibloo2"'
        exited with non-zero exit status 13:
        ntfs_attr_pread_i: ntfs_pread failed: Input/output error
        Failed to read of MFT, mft=6 count=1 br=-1: Input/output error
        Failed to open inode FILE_Bitmap: Input/output error
        Failed to mount '/dev/sdc1': Input/output error
        NTFS is either inconsistent, or there is a hardware fault, or it's a
        SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows
        then reboot into Windows twice. The usage of the /f parameter is very
        important! If the device is a SoftRAID/FakeRAID then first activate
        it and mount a different device under the /dev/mapper/ directory,
        (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid'
        documentation for more details.

    It doesn't mount on Windows either ("I/O device error"). It's an NTFS hard drive with a single partition. Of course I tried chkdsk /f; it reported several file segments as unreadable, but didn't say whether it fixed them or not (apparently not). I also tried it with the /b flag. ntfsfix reported the volume as corrupt. TestDisk was able to fix a small error with the partition table by adding the "80" flag for the active (only) partition. TestDisk also confirmed that the boot sector was fine and that it matched the backup. However, when attempting to repair the MFT, it couldn't read the MFT. It also couldn't list the files on the hard drive; it says the file system may be damaged. Active@ also shows that the MFT is missing or corrupt. So how do I fix the file system? Or the MFT?
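    Given the repeated Input/output errors, which point at failing hardware rather than a plain filesystem inconsistency, a prudent step before any further repair attempts is to image the drive and work on the copy (a sketch; the device name is this post's own, the file paths are placeholders):

        # Clone the failing partition, skipping and retrying bad sectors
        sudo ddrescue /dev/sdc1 /path/to/backup.img /path/to/rescue.map

        # Then point the repair tools at the image instead of the disk
        sudo testdisk /path/to/backup.img

    Every additional chkdsk or TestDisk pass on a drive with bad sectors risks losing more data, so the image becomes the safe thing to experiment on.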

    Read the article

  • Attempting to install netgear N300 Wireless USB Adapter on Ubuntu without a present internet connection

    - by Liz
    Hello Linux/Ubuntu world out there. I don't have internet on the desktop on which I am trying to install this USB wireless adapter, which is the problem; if the hardware worked, it would fix itself. I can NOT access the internet via anything but wireless. I am presently on my laptop searching for answers while trying to install this little device, so any advice will have to take that into account.

    So far I have tried WINE, which does not want to work; I have tried Windows Wireless Drivers, which doesn't want to work; and I have tried Software Sources > Other Software, but it will not acknowledge the CD-ROM as a repository, giving errors like:

        E: Unable to stat the mount point /cdrom/ - stat (2: No such file or directory)

    However, I can open the CD icon on my computer and access and browse the files, so the computer can read the CD. I've also tried just plugging the adapter in to see if the computer will automatically recognize the hardware; that does not work either. I have tested the USB port to verify that it works, and it does. My laptop recognizes the hardware and would easily install the software if I prompted it to. The difference is that my laptop runs Vista, and I HATE Vista. Any tips or tricks?
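    For the CD-repository error specifically, a sketch of the usual workaround (the mount point and device name are assumptions; adjust to wherever the CD actually mounts):

        # Create the mount point apt expects and mount the CD there
        sudo mkdir -p /media/cdrom
        sudo mount /dev/sr0 /media/cdrom

        # Register the CD as a package repository
        sudo apt-cdrom -d=/media/cdrom add

    Once the CD is registered, any packages it carries can be installed with apt-get without a network connection, which breaks the chicken-and-egg cycle.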

    Read the article

  • OOP oriented PHP app source code samples and advice

    - by abel
    The day I have been dreading has arrived. I never felt OOP or good software design was important (I knew they were important, but I thought I could manage without them). However, having read otherwise almost everywhere on the interwebs, I started dreading the day when my client would ask me for new features in an existing app. The day has come and the pain is unbearable!

    I have never coded my PHP websites "properly" (PHP is my primary language and the bulk of my work; I am learning Python, using web2py). I take care that the website doesn't fall apart in a daily-use scenario, but I code pages as if I were creating a list of static HTML files with bits of "magic code" in each of them, which bugs me a lot. How do I make the whole app more or less a single object? For example, how do I design the object model for an invoicing app? I use a lot of functions for doing any particular thing in the same fashion throughout the app (for example validation, generating IDs, calculating taxes, and so on). I know the basics of OOP in general. Can anyone point me to source code samples of functional apps written in PHP? Or can someone provide pointers so I can recode my existing apps in a more modular way?
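    As a starting point for the invoicing example (a sketch only; all class and method names here are made up for illustration), the usual first step is to turn each recurring "magic code" concern into a class with one responsibility:

        <?php
        // One class per concept, instead of repeated inline logic on each page.
        class TaxCalculator {
            private $rate;
            public function __construct($rate) { $this->rate = $rate; }
            public function taxFor($amount) { return $amount * $this->rate; }
        }

        class InvoiceLine {
            private $description, $unitPrice, $quantity;
            public function __construct($description, $unitPrice, $quantity) {
                $this->description = $description;
                $this->unitPrice   = $unitPrice;
                $this->quantity    = $quantity;
            }
            public function total() { return $this->unitPrice * $this->quantity; }
        }

        class Invoice {
            private $lines = array();
            private $taxes;
            public function __construct(TaxCalculator $taxes) { $this->taxes = $taxes; }
            public function addLine(InvoiceLine $line) { $this->lines[] = $line; }
            public function totalWithTax() {
                $subtotal = 0;
                foreach ($this->lines as $line) { $subtotal += $line->total(); }
                return $subtotal + $this->taxes->taxFor($subtotal);
            }
        }

        // Usage:
        // $invoice = new Invoice(new TaxCalculator(0.20));
        // $invoice->addLine(new InvoiceLine('Consulting', 100.0, 3));
        // echo $invoice->totalWithTax();  // 360

    The pages then shrink to wiring objects together, and shared logic such as tax calculation lives in exactly one place instead of being pasted into each file.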

    Read the article

  • Is there a Source Insight alternative?

    - by hansioux
    I am not a developer, but for my work I trace a lot of code, and reading other people's code is rather difficult, especially in bigger projects. Source Insight is a great application that stores all the symbols in a database, so when you see a new function being called, you can click on it and see how the function is written. You can see all the referrers of an object or jump to a caller. You don't need to break your train of thought and think up shell commands just to find these things every time you run into a new variable, structure or function from some other file. I have it running on WINE, but there are little glitches that sometimes get in the way.

    I know people will mention Cscope; I've tried it, but it really isn't the same. So, with so many huge open source projects out there for Ubuntu, are there native tools to help read them efficiently?

    EDIT: Thanks for the suggestions, but does Code::Blocks or CodeLite provide the ability to see the function that the mouse clicked on without jumping to it, so I can see the caller and callee at the same time?

    Read the article

  • Many small scripts, one repository or multiple?

    - by The Jug
    A co-worker and I have run into an issue that we have multiple opinions on. Currently we have a single git repository in which we keep all of our cronjobs. There are about twenty crons, and they are not really related except for the fact that they are all small Python scripts and essential for some activity. We use a fabric.py file to deploy and a requirements.txt file to manage the requirements for all of the scripts.

    Our question is basically: do we keep all of these scripts in one git repository, or should we separate them out into their own repositories? Keeping them in one repository makes it easier to deploy them onto one server, and we can use just one cron file for all the scripts. However, this feels wrong, as the twenty cronjobs are not logically related. Additionally, when one requirements.txt file covers all the scripts, it's hard to figure out which dependencies belong to which script, and all the scripts have to use the same versions of packages. We could separate the scripts out into their own repositories, but that creates twenty different repositories that have to be remembered and dealt with. Most of these scripts are not very large, and that solution seems like overkill.

    A related question: do we use one big crontab file for all the cronjobs, or a separate file for each? If each has its own, how does one crontab's installation avoid overwriting the other nineteen? This also seems like a pain, as there would then be twenty different cron files to keep track of.

    In short, our main question is: do we keep them all closely bundled as one repository, or do we separate them out, each with its own requirements.txt and fabfile.py? We feel like we're probably overlooking some really simple solution. Is there an easier way to deal with this issue?
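    On the crontab question specifically, one option worth knowing about (a sketch; the paths follow the Debian/Ubuntu convention, and the file and user names are placeholders) is /etc/cron.d, where each job ships as its own file, so installing one never overwrites another:

        # /etc/cron.d/sync-reports  (one file per job; the sixth field is the user)
        SHELL=/bin/bash
        59 16 * * * deploy /usr/bin/python /opt/crons/sync_reports/run.py > /var/log/sync_reports.log 2>&1

    A monorepo can then keep one directory per script, each with its own requirements.txt and its own cron.d fragment, which gets most of the isolation of separate repositories without twenty remotes to manage.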

    Read the article

  • What's the best structure for a repository?

    - by jpmelos
    I've looked into many open source software repositories, and I've found some common elements and some things people do differently from one another. For example, almost every repository has a README file, an INSTALL file, a COPYING file and the like. Other things differ:

    - Some projects, like git, have their source code at the root level, while others put the source code in a src/ folder, and others still, like the Linux kernel, spread the source code across different folders at the root level, dividing the code by area.
    - Some keep their tests in a t/ folder, others in a tests/ folder, or under another name.
    - Some have files about submitting patches and who the maintainers are, and those might be inside some Documentation/ folder or at the root level.

    Are there recommendations? A best practice? For example, I personally don't like having the code at the root level, git-fashion. It looks messy and confuses anyone trying to start as a contributor (especially because git has some code inside folders and scripts at the root level as well; it's really messy). If I were to start a project of my own and wanted to structure it right from the start, are there recommendations or best practices? How can I make a clean and clear structure? Thank you!
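    For a concrete starting point, a layout most contributors will recognize (a sketch; the folder names are common conventions, not a standard):

        project/
        ├── README          # what it is, how to start
        ├── INSTALL         # build and install steps
        ├── COPYING         # license
        ├── CONTRIBUTING    # patch submission, maintainers
        ├── src/            # all source code
        ├── include/        # public headers, if a C/C++ library
        ├── tests/          # test suite
        ├── docs/           # longer-form documentation
        └── scripts/        # build and maintenance helpers

    Keeping the root down to documentation plus a handful of folders is the main thing that makes a repository feel navigable to a newcomer.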

    Read the article

  • Forcing an upgrade, or how to upgrade packages after an update failed

    - by Orosjopie
    How do I continue or repair an upgrade, after upgrading from 13.04 to 13.10 failed? I'm currently running a terminal in recovery mode. I get an "unmet dependencies" error; I tried apt-get -f install, but get the error "unable to fetch some archives". Running apt-get update also produces errors, for example:

        Failed to fetch http://archive.canonical.com/ubuntu/dists/quantal/release.gpg
        Could not resolve 'archive.canonical.com'

    Is it my internet connection? The machine is connected via a network cable and the internet works on other computers. If it is the connection on my Ubuntu machine, how do I switch it on? Or how can I fix the upgrade problem so that I can boot Ubuntu normally, back up, and then format and reinstall Ubuntu 13.10 properly? I posted this problem before, linked to the problem I mentioned: "upgrading to ubuntu 13.10 failed/crashed". I did manage to install/upgrade some of the files and got Ubuntu 13.10 to reboot, but not 100%: it is slow and the Unity desktop is not displaying correctly. When trying some of the commands found online, I get the error that activity manager is not installed, and trying to install it conflicts with activity-log-manager-common. Please assist.
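    "Could not resolve" means DNS is failing, which in recovery mode usually just means networking was never brought up. A sketch of the usual sequence (recovery mode's own menu also has a network entry that does the first step; the interface name is an assumption):

        # Bring up networking inside recovery mode
        sudo dhclient eth0        # eth0 is an assumption; check names with: ip link

        # Then let apt finish what the upgrade started
        sudo dpkg --configure -a
        sudo apt-get update
        sudo apt-get -f install
        sudo apt-get dist-upgrade

    dpkg --configure -a completes half-installed packages, and apt-get -f install resolves the unmet dependencies once the archives can actually be fetched.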

    Read the article
