Search Results

Search found 75614 results on 3025 pages for 'file location'.

Page 872/3025

  • How to offset particles from point of origin

    - by Sun
    Hi, I'm having trouble offsetting particles from a point of origin. I want my particles to spread out only after a certain radius from the point of origin. For example, this is what I have right now: all particles are emitted from the point of origin. What I want is this: particles are offset from the point of origin by some amount, i.e. they start beyond the circle. What is the best way to achieve this? At the moment, I have the point of origin, the position of each particle and its rotation angle. Sorry for the poor illustrations. Edit: I was mistaken; when a particle is created, I have only the point of origin. I am able to calculate the particle's rotation in the update method, after it has moved to a new location, using the atan2() method. This is how I create/manage particles: a new particle is created at the enemy ship's death location, and for every new particle added to the list, Update and Draw are called to update its position, calculate its new angle and draw it.
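
    A minimal sketch of one way to do this (hypothetical names, written in Python rather than whatever engine the question uses), assuming each particle's direction angle is known, e.g. from the atan2() call mentioned above: push the spawn position out from the origin along that angle by the circle's radius before the first update.

      import math

      def spawn_offset(origin_x, origin_y, angle, radius):
          # Return a spawn position pushed out from the origin by `radius`
          # along the particle's direction `angle` (in radians).
          return (origin_x + math.cos(angle) * radius,
                  origin_y + math.sin(angle) * radius)

      # example: a particle heading 45 degrees, spawned 30 units from the origin
      x, y = spawn_offset(100.0, 100.0, math.radians(45), 30.0)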

    Read the article

  • Retrofit WebForms with ASP.NET MVC - NoVa Code Camp 2010.2 Demo

    - by Soe Tun
    Thank you to everyone who attended my Retrofit WebForms with ASP.NET MVC session at NoVa Code Camp 2010.2. It was a fun event for me, and I hope you had a great time and learned something from it. I wish I had more time to go over some of the more important topics in greater detail. I *promise* I will be writing a blog post series about them, since I'll have some vacation time during the December holidays to cover the topics I didn't get to cover in detail. Please note that the ".bak" file included in the zip file is a SQL Server database backup file; you have to restore it on your database server to run it with the source code demo. Please feel free to ask me about the demo project through Twitter or from this blog post, and I'll be glad to help you out. If you want me to give this presentation at your .NET User Group, please let me know and I'll be honored to speak there as well. Again, thank you all and have a great holiday season. Here is the download link to my demo project zip file with the PowerPoint presentation in it. Please let me know if the link doesn't work.

    Read the article

  • Cannot create a neutral unit with a trigger

    - by Xitcod13
    I've been playing around with the StarCraft UMS (Use Map Settings) mode for a while, and usually I figure things out pretty quickly when I'm stuck. Alas, not this time. I'm trying to place a neutral unit (player 12) using a trigger, and it refuses to work. I'm using SCMDraft 2.0 as my editor (but I can't get it to work in other editors either). All neutral units placed before the game starts are visible and all other triggers work fine; I also created a text message and it does display in-game, so the trigger does fire. For testing I created a trigger that looks like this: Player: neutral (I tried neutral players, player 1 and all players as well); Condition: always; Action: Create *1 Terran Medic* at '*location 022*' for *Neutral* (also tried neutral players). When I start the game nothing happens. Here is what I tried: placing a start location for the neutral player (player 12), and changing the owner of player 12 under map properties from the default (unused) to neutral and to computer. Although it seems like it should be a common enough problem, I don't see it in any FAQ and I can't find anything about it when I Google it. Thanks in advance.

    Read the article

  • NFS users getting a laggy GUI experience

    - by elzilrac
    I am setting up a system (Ubuntu 12.04) that uses LDAP, PAM, and autofs to load users and their home folders from a remote server. One of the options for login is sitting down at the machine and starting a GUI session. Programs such as Chromium (browser) that perform many read/write operations in the ~/.cache and ~/.config directories are slowing down the GUI experience as well as putting strain on the NFS server, which is causing other users to have problems. Ubuntu has the handy-dandy XDG_CONFIG_HOME and XDG_CACHE_HOME variables that can be set to change the default location of .cache and .config from the home folder to somewhere else. There are several places to set them, but most of them are not optimal:
    /etc/environment - pros: works across all shells; cons: cannot use variables like $USER, so you can't give users different locations for .cache and .config. Every user's new location would be the same directory.
    /etc/bash.bashrc - pros: $USER works, so you can place them in different folders; cons: only gets run for bash-compatible shells.
    ~/.pam_environment - pros: works regardless of shell; cons: cannot use system variables (like $USER), has its own syntax, and has to be created for every user.
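
    For context, this is roughly how an XDG-aware application decides where to put its cache; a hedged Python sketch of the Base Directory convention rather than any particular program's code, which is why exporting XDG_CACHE_HOME per user is enough to move the cache churn off the NFS-mounted home:

      import os

      def cache_dir(app_name):
          # XDG Base Directory convention: honour XDG_CACHE_HOME when it is set,
          # otherwise fall back to ~/.cache in the (NFS-mounted) home directory
          base = os.environ.get("XDG_CACHE_HOME") or os.path.expanduser("~/.cache")
          return os.path.join(base, app_name)

      # with XDG_CACHE_HOME=/var/local/cache/alice this yields
      # /var/local/cache/alice/chromium instead of a path under the NFS home
      print(cache_dir("chromium"))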

    Read the article

  • Oracle Loader for Hadoop 1.1.0.0.3

    - by mannamal
    We are pleased to announce availability of Oracle Loader for Hadoop 1.1.0.0.3, containing bug fixes and performance improvements to Oracle Loader for Hadoop. The updated product can be downloaded from here: http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/big-data-downloads-1451048.html Note that the Oracle Loader for Hadoop 1.1.0.0.3 kit is a complete kit containing the product and bug fixes. Fixes of the earlier version 1.1 patch releases are also included.
    Upgrading to Oracle Loader for Hadoop 1.1.0.0.3 (from versions 1.1.x):
    On the Oracle Big Data Appliance:
    1. Upload the new oraloader rpm to the first Oracle Big Data Appliance server. For example: /tmp/oraloader-1.1.0.0.3-1.x86_64.rpm
    2. As the root user, use dcli from the first Oracle Big Data Appliance server to copy the new rpm to all nodes. For example: #dcli -f /tmp/oraloader-1.1.0.0.3-1.x86_64.rpm -d /tmp/oraloader-replace.rpm
    3. As the root user, use dcli from the first Oracle Big Data Appliance server to replace the old oraloader rpm with the new one. For example: #dcli "rpm -e oraloader ; rpm -Uvh /tmp/oraloader-replace.rpm"
    On other hardware:
    1. Unzip oraloader-1.1.0.0.3.x86_64.zip at <location of install>
    2. Update OLH_HOME to point to <location of install>/oraloader-1.1.0.0.3

    Read the article

  • Installing Django on Windows

    - by Pranav
    Ever needed to install Django in a Microsoft Windows environment? Here is a quick start guide to make that happen:
    1. Read through the official Django installation documentation; it might just save you a world of hurt down the road.
    2. Download Python for your version of Windows.
    3. Install Python. My preference here is to put it into the Program Files folder under a folder named Python<Version>.
    4. Add your chosen Python installation path into your Windows path environment variable. This is an optional step, however it allows you to just type python in the command line and have it fire up the Python interpreter. An easy way of adding it is going into Control Panel, System and into the Environment Variables section.
    5. Download Django. You can either download a compressed file or, if you're comfortable with using version control, check it out from the Django Subversion repository.
    6. Create a folder named django under your <Python installation directory>\Lib\site-packages\ folder. Using my example above that would have been C:\Program Files\Python25\Lib\site-packages\.
    7. If you chose to download the compressed file, open it and extract the contents of the django folder into your newly created folder. If you'd prefer to check it out from Subversion, the normal check out points are http://code.djangoproject.com/svn/django/trunk/ for the latest development copy or a named release, which you'll find under http://code.djangoproject.com/svn/django/tags/releases/.
    Done, you now have a working Django installation on Windows. At this point, it'd be pertinent to confirm that everything is working properly, which you can do by following the first Django tutorial. The tutorial will make mention of django-admin.py, a utility which offers some basic functionality to get you off the ground. The file is located in the bin folder under your Django installation directory. When you need to use it, you can either type in the full path to it or simply add that file path into your environment variables as well. Hope this helps!
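
    As a quick sanity check before starting the tutorial, you can ask Django for its version from the Python interpreter; a minimal sketch, assuming django.get_version() is available in the release you installed:

      import django

      # prints something like "1.0" if the django package was found under
      # Lib\site-packages and is importable from your Windows Python install
      print(django.get_version())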

    Read the article

  • Packaging MATLAB (or, more generally, a large binary, proprietary piece of software)

    - by nfirvine
    I'm trying to package MATLAB for internal distribution, but this could apply to any piece of software with the same architecture. In fact, I'm packaging multiple releases of MATLAB to be installed concurrently. Key things: very large installation size (~4 GB); composed of a core and several plugins (toolboxes). Initially, I created a single "source" package (matlab2011b) that builds several .debs (mainly matlab2011b-core and matlab2011b-toolbox-* for each toolbox). The control file is just the standard all: dh $@ There is no Makefile; only copying files. I use a number of debian/*.install files to specify files to copy from a copy of an installation to /usr/lib/. The problem is, every time I build the thing (say, to make a correction to the core package), it recopies every file listed in the *.install files to e.g. debian/$packagename/usr/ (the build phase), and then has to bundle that into a .deb file. It takes a long time, on the order of hours, and is doing a lot of extra work. So my questions are: Can you make dh_install do a hardlink copy (like cp -l) to save time? (AFAICT from the man page, no.) Maybe I should just get it to do this in the Makefile? (That's going to be a big Makefile.) Can you make debuild only rebuild .debs that need rebuilding? Or specify which .debs to rebuild? Is my approach completely stupid? Should I break each of the toolboxes into its own source package too? (I'll have to do some silly templating or something, because there are hundreds of them. :/)

    Read the article

  • Making document storage in Sharepoint a breeze (leave the Web UI behind)

    - by deadlydog
    Hey everyone, I know many of us regularly use SharePoint for document storage in order to make documents available to several people, have them version controlled, etc.  Doing this through the Web UI can be a real headache, especially when you have multiple documents you want to modify or upload, or when IE isn’t your default browser.  Luckily we can access the SharePoint library like a regular network drive if we like. Open SharePoint in Internet Explorer (other browsers don’t support the Open with Explorer functionality), navigate to wherever your documents are stored, choose the Library tab, and then click Open with Explorer. This will open the document storage in Explorer and you can interact with the documents just like they were on any other network drive.  This makes uploading large numbers of documents or directory structures super easy (a simple copy-paste), and modifying your files nice and easy. As an added bonus, you can drag and drop that location from the address bar in Explorer to the Favorites menu so that it’s always easily accessible, and you can leave the SharePoint Web UI behind completely for modifying your documents.  Just click on the new favorite to go straight to your documents. You can even map this folder location as a network drive if you want to have it show up as another drive (e.g. an N: drive). I hope you found this as useful as I did.

    Read the article

  • Differentiating between user script input formats

    - by KChaloux
    I have a .NET project at work that provides a couple of (Iron)Python scripts to the customers, to allow them to customize the output of the program. The application generates code for certain machines, and supports a couple of different formats. Until recently, we only provided a script for one format. We're expanding upon that to include support for the others. If the user is using a script, they select their input script before generating the output code. A script designed for Format1 output is going to cause errors if they're trying to generate Format2 output. I need to deal with this. One option would just be to let the customers use common sense; if they load the wrong script it will just fail, or worse, produce inaccurate data. I'm inclined to provide a little more protection than that. At the moment I'm considering putting a shebang-style comment line at the top of the script, a la: # OUTPUT - Format1 If the user tries to run a Format2 process with a Format1 script, it will warn them. Alternatively I could create different file extensions for the input scripts that vary by type. The file-type comment approach helps prevent the script from actually loading improperly, at the cost of failing to warn the user until they've already selected it via a dialog box. Using different file extensions would allow me to cut down on visual clutter when providing a file dialog, but doesn't actually stop them from loading the wrong script. So I'm really not sure if the right approach is to just leave it alone, or to provide some safeguards.
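
    A rough sketch of the shebang-style check, written here as plain Python with hypothetical names (the real host is a .NET application loading IronPython scripts, but the idea is the same): read the first line of the selected script and compare the format it declares with the output format the user chose.

      def declared_format(script_path):
          # Return the format named in a leading '# OUTPUT - <format>' comment,
          # or None if the script does not declare one.
          with open(script_path) as f:
              first_line = f.readline().strip()
          if first_line.startswith("# OUTPUT -"):
              return first_line.split("-", 1)[1].strip()
          return None

      # warn before running a Format1 script in a Format2 job
      if declared_format("custom_output.py") != "Format2":
          print("Warning: this script was not written for Format2 output.")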

    Read the article

  • Errors when installing updates

    - by user71613
    I am getting the following errors when installing updates. They started to appear after I upgraded my system to 12.04.
      Errors were encountered while processing:
       samba-common
       samba-common-bin
       samba
       grub-pc
       grub-gfxpayload-lists
      Setting up samba-common (2:3.6.3-2ubuntu2.2) ...
      perl: error while loading shared libraries: libperl.so.5.12: cannot open shared object file: No such file or directory
      dpkg: error processing samba-common (--configure):
       subprocess installed post-installation script returned error exit status 127
      dpkg: dependency problems prevent configuration of samba-common-bin:
       samba-common-bin depends on samba-common (>= 2:3.4.0~pre1-2); however:
        Package samba-common is not configured yet.
      dpkg: error processing samba-common-bin (--configure):
       dependency problems - leaving unconfigured
      dpkg: dependency problems prevent configuration of samba:
       samba depends on samba-common (= 2:3.6.3-2ubuntu2.2); however:
        Package samba-common is not configured yet.
       samba depends on samba-common-bin; however:
        Package samba-common-bin is not configured yet.
      dpkg: error processing samba (--configure):
       dependency problems - leaving unconfigured
      Setting up grub-gfxpayload-lists (0.6) ...
      Setting up grub-pc (1.99-21ubuntu3.1) ...
      perl: error while loading shared libraries: libperl.so.5.12: cannot open shared object file: No such file or directory
      dpkg: error processing grub-pc (--configure):
       subprocess installed post-installation script returned error exit status 127
    Any ideas how to fix this?

    Read the article

  • "The volume filesystem root has only..."

    - by jcslzr
    I am having this problem on Ubuntu 12.04, but I find it strange that when I go to /tmp it won't allow me to delete some files, with the message "Operation not permitted" or "this file could not be handled because you don't have permissions to read it". It is only a PC and I have the root password. I was trying to get at least 2000 MB of free space on the root file system to upgrade to 12.10 and see if that resolved the problem. Currently free space on the root file system is 190 MB. This is my output:
      root@jcsalazar-Vostro-3550:~# df
      Filesystem     1K-blocks      Used  Available  Use%  Mounted on
      /dev/sda6        7688360   7112824     184984   98%  /
      udev             2009288         4    2009284    1%  /dev
      tmpfs             806636      1024     805612    1%  /run
      none                5120         0       5120    0%  /run/lock
      none             2016584      5316    2011268    1%  /run/shm
      /dev/sda5         472036    255920     191745   58%  /boot
      /dev/sda7       30758848   7085480   22110900   25%  /home
      root@jcsalazar-Vostro-3550:~# sudo parted -l
      Model: ATA TOSHIBA MK3261GS (scsi)
      Disk /dev/sda: 320GB
      Sector size (logical/physical): 512B/512B
      Partition Table: msdos
      Number  Start   End     Size    Type      File system     Flags
       1      1049kB  106MB   105MB   primary   fat16
       2      106MB   15.8GB  15.7GB  primary   ntfs            boot
       3      15.8GB  278GB   262GB   primary   ntfs
       4      278GB   320GB   41.9GB  extended
       5      278GB   279GB   499MB   logical   ext4
       6      279GB   287GB   7999MB  logical   ext4
       7      287GB   319GB   32.0GB  logical   ext4
       8      319GB   320GB   1443MB  logical   linux-swap(v1)
    I appreciate any new ideas that can help me. Thanks, Carlos

    Read the article

  • What sort of data should be sent for mouse-based movement in a multiplayer game?

    - by Daniel
    I'm new to the multiplayer rodeo here, so please bear with me... I am just getting started and I'm trying to figure out how to deal with movement. I've looked at the question Best way to implement mouse-based movement in MMOG, which gives me a pretty good idea, but I'm still struggling with what kind of data should be sent to the server. If a player is at position [x:0, y:0] and I click with the mouse on [x:40, y:40] to start movement, what information should I send to the server? Should I calculate the position based on velocity on the client side and just send the expected location? Or should I send the current location, velocity and direction? When the server is updating the clients on the players' whereabouts, should only the position be sent, with the clients expected to interpolate/predict movement, or can the direction sent from the client (instead of just coordinates) be used? My concern (or confusion) is regarding the ping/lag frequency of data updates and the use of a predictive algorithm, as I'd like the movement to be smooth even with high latency, and to prevent the ability to cheat (though that's not the top priority).
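
    A minimal, hypothetical sketch (field names invented for illustration) of the "send the intent, let both sides simulate" option: the client sends where it thinks it is and where the player clicked, and both client and server derive the direction and per-tick movement from that, so only occasional corrections have to travel over the wire.

      import math, json

      def make_move_message(pos, target, speed):
          # pos and target are (x, y) tuples, speed is units per second.
          # The server re-runs the same math, so a client cannot simply
          # claim an arbitrary position for itself.
          dx, dy = target[0] - pos[0], target[1] - pos[1]
          dist = math.hypot(dx, dy)
          direction = (dx / dist, dy / dist) if dist else (0.0, 0.0)
          return json.dumps({
              "type": "move",
              "from": pos,        # where the client thinks it is
              "to": target,       # where the player clicked
              "dir": direction,   # derived; handy for interpolation
              "speed": speed,
          })

      msg = make_move_message((0, 0), (40, 40), 5.0)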

    Read the article

  • Black Screen on pressing back button in libgdx

    - by user26384
    In my game, when I touch an advertisement and then press the back button to return to the game, I get a black screen. I referred to this link. I tried to change IosGraphics.java, but the change is not reflected in the MonoTouch project. I did the following:
    1. Extracted nightly.zip and opened gdx-backend-iosmonotouch-sources. From there I changed IosGraphics.java.
    2. I then made a new jar file, gdx-backend-iosmonotouch.jar, and replaced the original jar file in the nightly folder with it.
    3. Compressed all the files from the nightly folder into a .zip file.
    4. Used this .zip file to make a new project through gdx-setup-ui.jar.
    I tried to open my project in MonoTouch, and from com-gdx-backendios.dll I found that the changes in IosGraphics are not being reflected. Am I missing something? How do I solve this? I even tried to open gdx-backend-iosmonotouch-sources.jar with WinRAR, edit IosGraphics.java and save it. Even this didn't work.

    Read the article

  • Deploying InfoPath forms – idiosyncrasies

    - by PointsToShare
    Well, I have written a sophisticated PowerShell script to expedite the deployment of InfoPath forms (.XSN files). Along the way, by way of trial and error (mostly error and error), I discovered a few little things. Here they are.
    •    Regardless of how the install command is run – PowerShell or the GUI in Central Admin – SharePoint enwraps the XSN inside a solution (WSP), then installs and deploys the solution.
    •    The solution is named by concatenating "form-" with the first 16 characters of the file name (or fewer if the file name is shorter than 16) and the required .wsp at the end. So if the form name was MyInfopathForm.xsn the solution name will be form-MyInfopathForm.wsp, but for WithdrawalOfRequestsForRefund.xsn it will be named form-WithdrawalOfRequ.wsp.
    •    It only gets worse! Had there already been a solution file with the same name, Microsoft appends a three-digit number to the name, like MyInfopathForm-123.wsp. Remember a digit is a finger, I suspect a middle finger, so when you deploy the same form – many versions of it, or, as it was in my case, testing a script time and again – you'll end up with many such digit (middle finger) appended solutions, all un-deployed except the last one. This is not a bug. It's a feature!
    Well, there are ways around it. When deploying by hand, remove the solution from the solution store before deploying the form again; in the script I do the same thing. And finally, an important caveat: make sure that all your form names are unique in the first 16 characters. If you also have a form with the name WithdrawalOfRequestForRelief.xsn, you're in trouble! That's all folks!
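
    The naming rule is easy to reproduce; here is a small illustrative sketch (not the deployment script itself) that mimics the rule and shows why WithdrawalOfRequestsForRefund.xsn and WithdrawalOfRequestForRelief.xsn collide:

      import os

      def solution_name(xsn_path):
          # Mimic SharePoint's wrapping rule: "form-" plus the first 16
          # characters of the form's file name (without extension) plus ".wsp".
          stem = os.path.splitext(os.path.basename(xsn_path))[0]
          return "form-" + stem[:16] + ".wsp"

      print(solution_name("MyInfopathForm.xsn"))                 # form-MyInfopathForm.wsp
      print(solution_name("WithdrawalOfRequestsForRefund.xsn"))  # form-WithdrawalOfRequ.wsp
      print(solution_name("WithdrawalOfRequestForRelief.xsn"))   # form-WithdrawalOfRequ.wsp (clash!)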

    Read the article

  • Run Win7 Guest (raw disk) in Ubuntu (which was installed as Dual Boot on existing Win7)

    - by kingdango
    I installed Ubuntu 12.10 on top of Win 7 as a dual boot (awesome!). I'm hoping to use VirtualBox to run my original Win7 instance as a guest OS under Ubuntu. I found this existing question and followed the directions to no avail. I can get the VMDK file created, but when I run it I just get a blank black screen with no additional information, and Windows never loads. I see no HD activity or anything that would indicate it's loading. I used this command to create the VMDK file:
      VBoxManage internalcommands createrawvmdk -filename ~/.VirtualBox/Win7Native.vmdk -rawdisk /dev/sda3
    It looks like everything was created correctly, but I just get a blank screen when I run the VM. I do get this warning when I boot the VM:
      VirtualBox - Warning: The virtual machine execution may run into an error condition as described below... The medium '/home/XXX/.VirtualBox/Win7Native.vmdk' has a logical size of 583GB but the file system the medium is located on can only handle up to 16GB in theory. We strongly recommend to put all your virtual disk images and the snapshot folder on a proper file system (e.g. ext3) with a sufficient size. ErrorId: Fat Partition Detected. Severity: Warning
    How can I get this working?

    Read the article

  • Upgrading Ubuntu (32 bit) 10.10 -> 11.04 fails and causes a kernel panic on boot

    - by Ubuntu Upgrade
    On an Ubuntu 10.10 machine I upgraded to Ubuntu 11.04 using the update manager. The upgrade fails and leaves the system in an unstable state. When I reboot the system I get a kernel panic on boot. The error points to /opt/abc/runtime/lib/libc.so.6. By researching this I found that a piece of third-party software, abc, causes the problem. It has its own runtime (libc) library. In the /lib/ directory there is a link file /lib/ld-abc.so.2 -> /opt/abc/runtime/lib/ld-linux.so.2. If we rename this file to /lib/abc.so.2 or remove it, the upgrade succeeds. Here is the upgrade log where it crashes (apt-term.log):
      =====
      Services restarted successfully.
      Processing triggers for libc-bin ...
      ldconfig deferred processing now taking place
      /usr/bin/dpkg: /opt/abc/runtime/lib/libc.so.6: version `GLIBC_2.11' not found (required by /usr/bin/dpkg)
      /usr/bin/dpkg: /opt/abc/runtime/lib/libc.so.6: version `GLIBC_2.8' not found (required by /lib/libselinux.so.1)
      =====
    Could you please let me know what the problem would be with having a runtime link library file in the /lib directory. Does the Ubuntu upgrade check the third-party runtime as well?

    Read the article

  • How can I redirect all files in a directory that don't conform to a certain filename structure?

    - by user18842
    I have a website where a previous developer had updated several webpages. The issue is that the developer made each new webpage with a new filename, and deleted the old filenames. I've worked with .htaccess redirects for a few months now and have some understanding of the usage; however, I am stumped with this task. The old pages were named like so: www.domain.tld/subdir/file.html The new pages are named: www.domain.tld/subdir/file-new-name.html The first word of all new files is the exact name of the old file, and all new files have the same last 2 words:
      www.domain.tld/subdir/file1-new-name.html
      www.domain.tld/subdir/file2-new-name.html
      www.domain.tld/subdir/file3-new-name.html
    etc. We also need to be able to access the url: www.domain.tld/subdir/ The new files have been indexed by Google (the old urls cause 404s, and need to be redirected to the new ones so that Google will be friendly), and the client wants to keep the new filenames as they are more descriptive. I've attempted to redirect it in many different ways without success, but I'll show the one that stumps me the most:
      RewriteBase /
      RewriteCond %{THE_REQUEST} !^subdir/.*\-new\-name\.html
      RewriteCond %{THE_REQUEST} !^subdir/$
      RewriteRule ^subdir/(.*)\.html$ http://www.domain.tld/subdir/$1\-new\-name\.html [R=301,NC]
    When visiting www.domain.tld/subdir/file1.html in the browser, this causes a 403 Forbidden error with a url like so: www.domain.tld/subdir/file1-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name.html I'm certain it's probably something simple that I'm overlooking; can someone please help me get a proper redirect? Thanks so much in advance!
    EDIT: I've also got all the old filenames saved in a separate document in case I need them, set up like the following example: (file(1|2|3|4|5)|page(1|2|3|4|5)|a(l(l|lowed|ter)|ccept)

    Read the article

  • Do input template languages exist?

    - by marczellm
    When I have to create some textual representation of data, I can use a template language, so that my code does not have to worry about the structure of the output file. I can sometimes even write code that's independent of whether the output is XML, LaTeX or any other plain text. A simple example:
    Template (in separate file):
      <someXmlTag> $variableName </someXmlTag>
    Code:
      Template(temstring).substitute(variableName="value")
    Result (written to output file):
      <someXmlTag> value </someXmlTag>
    I want to do the same, but in the opposite direction. I have XML or plain text or whatever files as input. I want to describe the input structure in a separate file that looks like the input but has these variable declarations in it, and I want to handle it with code that's independent of the structure. Is there a library for this concept? (We usually handle XML input by using an XML parser library to describe the input structure in program code, handle plain-text input by writing regexes in code, and don't handle LaTeX input because LaTeX can't really be parsed.)
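
    The underlying concept is often called a "reverse template" or scraping template: the same placeholder file is compiled into a pattern with named capture groups instead of being filled in. A hedged Python sketch of the idea, not any specific library:

      import re

      def compile_template(template):
          # Turn '<someXmlTag> $variableName </someXmlTag>' into a regex with
          # one named group per $placeholder; literal text is escaped as-is.
          parts = re.split(r"\$(\w+)", template)   # literal, name, literal, ...
          pattern = ""
          for i, part in enumerate(parts):
              if i % 2:                            # odd indices are placeholder names
                  pattern += "(?P<%s>.+?)" % part
              else:
                  pattern += re.escape(part)
          return re.compile(pattern, re.DOTALL)

      tpl = compile_template("<someXmlTag> $variableName </someXmlTag>")
      match = tpl.search("<someXmlTag> value </someXmlTag>")
      print(match.group("variableName"))           # -> value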

    Read the article

  • Using "gedit", a string of errors occours

    - by Kumuluzz
    I'm trying to program some small programs in C in the terminal and gedit. But every time I use gedit, a string of errors occurs. When I open a new file nothing happens, but at the exact moment I save the file, a string of errors occurs. Also, if I open an already existing file (not a new one), then when the gedit window opens the old file, all the lines of errors are written. In both cases it takes less than a second and nothing more happens. An example of the error: "error: line 35272: 0 is wrong flag id". They are all similar to this, except the line number is different. There are like 50 of them. I'm running 11.10, just installed it a couple of days ago (yes, I'm a newbie) and I've updated all the files recently. I've tried reinstalling gedit via:
      sudo apt-get --reinstall install gedit
    It kinda made it worse; now a lot of the lines are shown twice. So now it goes (this is a copy of the first lines of the error):
      error: line 6787: 0 is wrong flag id
      error: line 10034: 0 is wrong flag id
      error: line 10034: 0 is wrong flag id
      error: line 11351: 0 is wrong flag id
      error: line 11351: 0 is wrong flag id
      error: line 11849: 0 is wrong flag id
      error: line 11849: 0 is wrong flag id
      error: line 15609: 0 is wrong flag id
      error: line 15609: 0 is wrong flag id
      error: line 19814: 0 is wrong flag id

    Read the article

  • Toolset agnostic build server and Silverlight projects

    - by Marko Apfel
    Problem: Normally I try to keep my continuous integration as toolset-free as possible, to ensure that no local stuff can have an impact on my build. My Silverlight app references a special compile target in a folder outside my developer tree:
      <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" />
    So I copied the stuff from this folder to a local one and changed the call to this target in my csproj:
      <Import Project="..\..\..\tools\WebApplications\Microsoft.WebApplication.targets" />
    And now the Visual Studio Conversion Wizard welcomes me with this:
    Solution: Regardless of which line I write, this conversion comes back again and again if the line has any form other than
      <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" />
    So it seems that there is no simple way to change this behaviour.
    Workaround: I must accept that this line must be in the csproj, and to run the build the toolset must be copied to the build server at the correct location. So go to your development machine where Visual Studio is installed and copy the folder "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications" to your build server at the equivalent location.
    Xmas wishes to Microsoft: please provide technologies to let us developers bundle all needed stuff for a project in one developer tree. It should be possible that one checkout starts us up! No additional installations, regardless of whether it is a developing machine or a dedicated build or continuous integration server. Silverlight is only one example; code analysis configurations can also be terrible, and much more …

    Read the article

  • Trouble installing Java

    - by BRKsays
    I am running Ubuntu 12.04 LTS. I wanted to install Java, so I downloaded the 32-bit self-extracting .bin file from http://www.java.com and tried to install it according to their instructions. First I made the file executable. Then I created /usr/java/. After that I have to run this command: ./jre-7u<version>-linux-i586.bin. But I'm stuck here. My Java version is Java 6 u32. When I enter the command it says "no such file or directory". What to do? Please help. Also, I'm trying to install 32-bit Java on my 64-bit Precise; could that possibly be the problem? I tried to follow the second answer by Jonas Christensen. I tried to open it, but it says the file is of an unknown type. I tried the terminal command ./jre-6u31-linux-i586.bin, but it gave this:
      Unpacking...
      Checksumming...
      Extracting...
      ./jre-6u32-linux-i586.bin: 86: ./jre-6u32-linux-i586.bin: ./install.sfx.5736: not found
      Failed to extract the files. Please refer to the Troubleshooting section of the Installation Instructions on the download page for more information.

    Read the article

  • How do I automatically start Clamz with AMZ files for Amazon MP3 downloads?

    - by Takkat
    Chromium can open downloaded files with the default application (e.g. a PDF in Evince). In my setup a downloaded .amz file (for Amazon MP3) always opened with Gedit. However, I would like all downloaded .amz files to automatically open with Clamz, a command-line tool for downloading that works like a charm. As in Nautilus my .amz files were associated to open with Gedit too, I thought it was a good idea to add a clamz.desktop file in ~/.local/share/applications (according to this answer):
      [Desktop Entry]
      Encoding=UTF-8
      Name=Clamz
      Comment=Open AMZ files for Amazon MP3 download
      Exec=/usr/bin/clamz %u
      Terminal=True
      Type=Application
      Icon=
      Categories=Application;
      StartupNotify=true
      MimeType=audio/x-amzxml;
      NoDisplay=true
    This lets me choose Clamz as the default application in Nautilus. But when opening an .amz file in Nautilus it still does not open with Clamz as expected, but is treated as an executable text file instead (note that the executable bit is not set!). Is there any other way to make Chromium or Nautilus always open an .amz file with Clamz? Did I miss changing a setting in another place?

    Read the article

  • How do I create the "Gnome-Desktop-Item-Edit" program's launch icon with root privileges and more?

    - by GanZ
    I personally don't prefer running commands in the terminal to achieve a task and prefer apps to do the job. Creating launchers for apps is one such task, where I prefer the gnome-desktop-item-edit application. If the gnome package is installed, just searching "create launcher" opens the app. But it doesn't serve the purpose, because for starters the application cannot create launchers for various apps with root permission, nor choose the location where the launchers are created. Usually launchers created with root permission go to /usr/share/applications, and without root permission to ~/.local/share/applications. I don't prefer the latter location as it is vulnerable to deletion. Hence, in order to create the launchers through gnome with root, every time I am forced to open the app through the terminal using the command below:
      $ sudo gnome-desktop-item-edit ~/.local/share/applications --create-new
    I don't want to open a terminal every time I want to create an application launcher on Unity! I am able to lock the "Create Launcher" app in the Launcher, but not with root privileges. So I want to be able to create the "Create Launcher" app shortcut on Unity with root privileges by default, and for the app to create the launchers at /usr/share/applications by default. Please help!
    P.S. I don't have enough rep points to add screenshots to help with the question!

    Read the article

  • Is there a better way to organize my module tests that avoids an explosion of new source files?

    - by luser droog
    I've got a neat (so I thought) way of having each of my modules produce a unit-test executable if compiled with the -DTESTMODULE flag. This flag guards a main() function that can access all static data and functions in the module, without #including a C file. From the README:
    -- Modules --
    The various modules were written and tested separately before being coupled together to achieve the necessary basic functionality. Each module retains its unit-test, its main() function, guarded by #ifdef TESTMODULE. `make test` will compile and execute all the unit tests, producing copious output, but importantly exiting with an appropriate success or failure code, so the `make test` command will fail if any of the tests fail.
    Module TOC
    __________
      test  obj       src       header    structures                CONSTANTS
      ----  ---       ---       ---       --------------------
      m     m.o       m.c       m.h       mfile mtab                TABSZ
      s     s.o       s.c       s.h       stack                     STACKSEGSZ
      v     v.o       v.c       v.h       saverec_
            f.o       f.c       f.h       file
      ob    ob.o      ob.c      ob.h      object
      ar    ar.o      ar.c      ar.h      array
      st    st.o      st.c      st.h      string
      di    di.o      di.c      di.h      dichead dictionary
      nm    nm.o      nm.c      nm.h      name
      gc    gc.o      gc.c      gc.h      garbage collector
      itp             itp.c     itp.h     context
            osunix.o  osunix.c  osunix.h  unix-dependent functions
    It's compiled by a tricky bit of makefile,
      m: m.c ob.h ob.o err.o $(CORE) itp.o $(OP)
          cc $(CFLAGS) -DTESTMODULE $(LDLIBS) -o $@ $< err.o ob.o s.o ar.o st.o v.o di.o gc.o nm.o itp.o $(OP) f.o
    where the module is compiled with its own C file plus every other object file except itself. But it's creating difficulties for the kindly programmer who offered to write the Autotools files for me. So the obvious way to make it "less weird" would be to bust out all the main() functions into separate source files. But, but ... Do I gotta?

    Read the article

  • How does 301 redirection work across the network? & should I use it if there is a chance we made need to change the resource back to the original URL?

    - by Faust
    I've built a CMS that makes it fairly easy for my client to relocate pages in their site hierarchy. This site has all human-readable and intuitive URLs, so moving a page necessarily means that its URL changes. I am storing records of each resource's past URLs in the data store so that requests for bygone URLs are re-routed to their appropriate successors. I'm warning my clients not to re-arrange the site willy-nilly (for numerous reasons), but nevertheless I suspect there's a chance page moves could get reversed from time to time. So I'm trying to figure out whether 301, 302 or 307 redirects should be used when serving up pages to requests for out-of-date URLs. I understand the value of using 301 for search engine optimization, but my concern is that this system could inadvertently make some pages unavailable to some users. Questions: if the clients move a page at location/URL A to a new location B, users get the redirect from A to B, and then the clients move the page back to A again, how long can I expect any of those users to keep getting their requests for A redirected to B -- in this case sending them to my friendly 404 page? Is it until an item in their browser history is cleared? Is the redirect somehow cached in routers throughout the internet? How does this work? How long can I expect the 301 redirect to linger out there?
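
    For what it's worth, the choice between them is just the status code on the redirect response; a minimal, hypothetical Python sketch (not the CMS in question) of answering an out-of-date URL with either a cacheable 301 or a temporary 302:

      from http.server import BaseHTTPRequestHandler, HTTPServer

      # hypothetical table of past URL -> current URL
      MOVED = {"/old-page": "/new-section/new-page"}
      PERMANENT = True   # flip to False to answer with 302 instead of 301

      class RedirectHandler(BaseHTTPRequestHandler):
          def do_GET(self):
              if self.path in MOVED:
                  # 301 tells clients and caches the move is permanent;
                  # 302/307 says "ask me again next time"
                  self.send_response(301 if PERMANENT else 302)
                  self.send_header("Location", MOVED[self.path])
                  self.end_headers()
              else:
                  self.send_response(404)
                  self.end_headers()

      if __name__ == "__main__":
          HTTPServer(("localhost", 8080), RedirectHandler).serve_forever()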

    Read the article
