Search Results

Search found 48823 results on 1953 pages for 'run loop'.

Page 344/1953

  • BizTalk 2009 - SQL Server Job Configuration

    - by StuartBrierley
    Following the installation of BizTalk Server 2009 on my development laptop I used the BizTalk Server Best Practices Analyser, which highlighted the fact that two of the SQL Server Agent jobs that BizTalk relies on were not running successfully. Upon investigation it turned out that these jobs needed to be configured before they would run successfully.

    To configure these jobs open SQL Server Management Studio, expand SQL Server Agent > Jobs and double click on the appropriate job. Select Steps and then edit the appropriate entries.

    Backup BizTalk Server (BizTalkMgmtDb)
    This job consists of three steps: BackupFull, MarkAndBackupLog and ClearBackupHistory.

    BackupFull
      exec [dbo].[sp_BackupAllFull_Schedule] 'd' /* Frequency */, 'BTS' /* Name */, '<destination path>' /* location of backup files */
    The frequency here is set/left as daily and the name is left as BTS. You must provide a full destination path for the backup files to be stored. There are also two optional parameters: a flag that controls whether the job forces a full backup if a partial backup fails, and a parameter to control the time of day to run the full backup (the default is midnight UTC). For example:
      exec [dbo].[sp_BackupAllFull_Schedule] 'd' /* Frequency */, 'BTS' /* Name */, '<destination path>' /* location of backup files */, 0, 22

    MarkAndBackupLog
      exec [dbo].[sp_MarkAll] 'BTS' /* Log mark name */, '<destination path>' /* location of backup files */
    You must provide a destination path for the log backups. Optionally you can also add an extra parameter that tells the procedure to use local time:
      exec [dbo].[sp_MarkAll] 'BTS' /* Log mark name */, '<destination path>' /* location of backup files */, 1

    Clear Backup History
      exec [dbo].[sp_DeleteBackupHistory] @DaysToKeep=7
    This will clear out the instances in the MarkLog table older than 7 days.

    DTA Purge and Archive (BizTalkDTADb)
    This job consists of a single step.

    Archive and Purge
      exec dtasp_BackupAndPurgeTrackingDatabase
        0,    --@nLiveHours tinyint,
        1,    --@nLiveDays tinyint = 0,
        30,   --@nHardDeleteDays tinyint = 0,
        null, --@nvcFolder nvarchar(1024) = null,
        null, --@nvcValidatingServer sysname = null,
        0     --@fForceBackup int = 0
    Any completed instance that is older than the live days plus live hours will be deleted, as will any associated data. Any data older than the HardDeleteDays will be deleted - this means that long-running orchestration instances that would otherwise never be purged will at some point have their data cleared down while the instances themselves are allowed to continue, thus preventing the DTA database from growing indefinitely. HardDeleteDays should always be greater than the soft purge window. The @nvcFolder parameter is the path for the backup files; if it is null the job will not run, failing on the Archive and Purge step with the error "The @nvcFolder parameter cannot be null" (visible in SQL Server Management Studio under Job Activity Monitor, View History).
    How long you choose to keep instances in the tracking database is really up to you. For development I have set this up as:
      exec dtasp_BackupAndPurgeTrackingDatabase 0, 1, 30, '<destination path>', null, 0
    On a live server you may want to adjust these figures:
      exec dtasp_BackupAndPurgeTrackingDatabase 0, 15, 20, '<destination path>', null, 0

    Read the article

  • Source Control and SQL Development &ndash; Part 3

    - by Ajarn Mark Caldwell
    In parts one and two of this series, I have been specifically focusing on the latest version of SQL Source Control by Red Gate Software.  But I have been doing source-controlled SQL development for years, long before this product was available, and well before Microsoft came out with Database Projects for Visual Studio.  “So, how does that work?” you may wonder.  Well, let me share some of the details of how we do it where I work…

    The key to this approach is that everything is done via Transact-SQL script files; either natively written T-SQL, or generated.  My preference is to write all my code by hand, which forces you to become better at your SQL syntax.  But if you really prefer to use the Management Studio GUI to make database changes, you can still do that, and then use the Generate Scripts feature of the GUI to produce T-SQL scripts afterwards, and store those in your source control system.  You can generate scripts for things like stored procedures and views by right-clicking on the database in the Object Explorer and choosing Tasks, Generate Scripts.  You can also do that for the CREATE scripts for tables, but that does not work when you have a table that is already in production and you need to make just a simple change, such as adding a new column or index.  In this case, you can use the GUI to make the table changes, and then instead of clicking the Save button, click the Generate Change Script button.  Then, once you have saved the change script, go ahead and execute it on your development database to actually make the change.  I believe that it is important to actually execute the script rather than just click the Save button, because this is your first test that your change script is working and you didn’t somehow lose a portion of the change.

    As you can imagine, all this generating of scripts can get tedious, and it is tempting to skip it entirely, so again, I would encourage you to just get in the habit of writing your own Transact-SQL code; then it is just a matter of remembering to save your work, just like you are in the habit of saving changes to a Word or Excel document before you exit the program.

    So, now that you have all of these script files, what do you do with them?  Well, we organize ours into folders labeled ChangeScripts, Functions, Views, and StoredProcedures, and those folders are loaded into our source control system.  ChangeScripts contains all of the table and index changes, and anything else that is basically a one-time-only execution.  Of course you want to write your scripts with qualifying logic so that if a script were accidentally run more than once in a database, it would not crash nor corrupt anything; but these scripts are really intended to be run only once in a database.

    Once you have your initial set of scripts loaded into source control, making changes, such as altering a stored procedure, becomes a simple matter of checking out your CREATE PROCEDURE* script, editing it in SSMS, saving the change, executing the script in order to effect the change in your database, and then checking the script back in to source control.  Of course, this is where the lack of integration for source control systems within SSMS becomes an irritation, because it means that in addition to SSMS, I also have my source control client application running to do the check-out and check-in.
    And when you have 800+ procedures like we do, it can be quite tedious to locate the procedure I want to change in source control, check it out, then locate the script file in my working folder, open it in SSMS, make the change, save it, and then go back to source control to check it in.  Granted, it is not nearly as burdensome as, say, losing your source code and having to rebuild it from memory, or losing the audit trail that good source control systems provide.  It is worth the effort, and this is how I have been doing development for the last several years.

    Remember that everything that SQL Server Management Studio does in modifying your database can also be done in plain Transact-SQL code, and this is what you are storing.  And now I have shown you how you can do it all without spending any extra money.  You already have source control, or can get free, open-source source control systems (almost seems like an oxymoron, doesn’t it?), and of course Management Studio is free with your SQL Server database engine software.  So, whether you spend the money on tools to make it easier or not, you now have no excuse for not using source control with your SQL development.

    * In our current model, the scripts for stored procedures and similar database objects are written with an IF EXISTS…DROP… at the top, followed by the CREATE PROCEDURE… section, and that followed by a section that assigns permissions.  This allows me to run the same script regardless of whether the procedure previously existed in the database.  If the script were only an ALTER PROCEDURE, then it would fail the first time that procedure was deployed to a database, unless you wrote other code to stub it in if it did not exist.  There are a few different ways you could organize your scripts for deployment, each with its own trade-offs, but I think it is absolutely critical that whichever way you organize things, you ensure that the same script is run throughout the deployment cycle, and do not allow customizations to creep in between TEST and PROD.  If you do, then you have broken the integrity of your deployment process, because what you deployed to PROD was not exactly the same as what was tested in TEST, so you have effectively now released untested code into PROD.

    Read the article

  • Will upgrading a beta to the full version work without bugs? [closed]

    - by Nicky Bailuc
    Possible Duplicate: I installed an alpha or beta, am I up to date with the final release if I keep upgrading?
    When the beta version of 13.04 comes out, I would like to install it and put all my programs, files, and data on it. On the 18th, when the final version of 13.04 comes out, will I be able to upgrade the beta to the final release without any issues and run it successfully without bugs? I'm asking this because when I upgraded 12.04 to 12.10 it had a lot of glitches. Will 13.04 run the same after upgrading as if I had installed it directly?

    Read the article

  • launching executable from /usr/local/bin needs access to local file

    - by kedmond
    I have an executable, called "octane", that I want to be able to launch globally. The executable requires access to a local data file, "octane.dat", in order to run. I placed the directory containing the executable and the data file in /opt/ as root and created a symbolic link to the executable in /usr/local/bin/. Now, if I type "octane" anywhere, it launches but throws up an error saying it won't run without "octane.dat". Octane will only launch if my current working directory is the directory that contains the executable and "octane.dat". Any suggestions on how to fix this? Do I have to make that directory global using .bashrc? Thanks.
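
    One common workaround for programs that look up their data files relative to the current working directory is to replace the symlink with a small wrapper script that changes into the install directory first. A rough sketch (the /opt/octane path is an assumption based on the description above; adjust it to wherever the files actually live):

      #!/bin/sh
      # /usr/local/bin/octane -- wrapper so the real binary can find octane.dat (install path below is assumed)
      cd /opt/octane || exit 1
      exec ./octane "$@"

    After saving the wrapper, remove the old symlink so the two don't collide and make the new file executable with chmod +x /usr/local/bin/octane.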

    Read the article

  • Dans Guardian install

    - by Matt
    I'm trying to install Dans Guardian on a virtual machine. The instructions ask me to run the ./configure script and then execute the command make install. The configure script runs fine but the make install throws errors:

      Making all in src
      make[2]: Entering directory `/webmin/dansguardian-2.10/src'
      g++ -DHAVE_CONFIG_H -I. -I.. -D__CONFFILE='"/usr/local/etc/dansguardian/dansguardian.conf"' -D__LOGLOCATION='"/usr/local/var/log/dansguardian/"' -D__PIDDIR='"/usr/local/var/run"' -D__PROXYUSER='"nobody"' -D__PROXYGROUP='"nobody"' -D__CONFDIR='"/usr/local/etc/dansguardian"' -g -O2 -MT dansguardian-fancy.o -MD -MP -MF .deps/dansguardian-fancy.Tpo -c -o dansguardian-fancy.o `test -f 'downloadmanagers/fancy.cpp' || echo './'`downloadmanagers/fancy.cpp
      downloadmanagers/fancy.cpp: In member function 'std::string fancydm::timestring(int)':
      downloadmanagers/fancy.cpp:507:72: error: 'snprintf' was not declared in this scope
      make[2]: *** [dansguardian-fancy.o] Error 1
      make[2]: Leaving directory `/webmin/dansguardian-2.10/src'
      make[1]: *** [all-recursive] Error 1
      make[1]: Leaving directory `/webmin/dansguardian-2.10'
      make: *** [all] Error 2

    I'm running 12.04 LTS server x64.
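
    The "'snprintf' was not declared in this scope" error usually means the file never includes the header that declares snprintf, which newer g++ releases no longer tolerate in older DansGuardian sources. One commonly suggested workaround is to add the missing <cstdio> include to the offending file and rebuild; a rough sketch (untested here, with the source path taken from the log above):

      cd /webmin/dansguardian-2.10
      # prepend the header that declares snprintf to the file the compiler complained about
      sed -i '1i #include <cstdio>' src/downloadmanagers/fancy.cpp
      make && sudo make install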

    Read the article

  • Best design for a memory resident tool

    - by Andrew S.
    I apologize if this tends more toward design than programming, but here goes. What design would you recommend for a database that:
    - Is memory resident
    - Must run on Windows, Linux and (at a stretch) the Mac
    - Accepts multiple queries simultaneously
    - Has minimum overhead, since a search is expected to take <0.25s
    This program implements a domain-specific search. Think of it as a database, but one that takes advantage of domain-specific information to outperform a conventional database search (for example, with custom oracle indexing). We have a custom data structure for our data. Our prototype is a simple exe that constructs the database in memory each time it is run. We were thinking that perhaps this program would suffice, augmented with sockets so it can listen for queries. This database will be static; its contents will change infrequently. We expect queries, and the solution, to be delivered via a web service.

    Read the article

  • Fullscreen windowed mode in id games

    - by Oli
    I run a TwinView, dual-monitor system. I like to play games fullscreen on one of the monitors, not spanning both. With Wine, this works by just setting it to desktop mode and setting the resolution to that of one screen. For OpenTTD, I used Compiz's Window Rules plugin. But I have a few native games that this doesn't work for. Today's experiment involved Prey (Doom 3 engine), but I've had similar issues with other id engines. So in short: has anybody found a way of having Prey/OpenArena/Doom3/etc run in windowed mode but with fullscreen decorations (that is to say, no borders and above the panel)?

    Read the article

  • Where is the "pysdm" package?

    - by John Boy
    I am new to Ubuntu. I have some old hardware lying around so I decided to build a backup/storage device. I am trying to follow this lifehacker article. It asks me to open a terminal and run sudo apt-get install pysdm. However, I keep getting "Unable to locate package pysdm". Does anyone know where my pysdm is or where I can get one? I have run Ubuntu from a USB key and have installed it on a hard drive, and I get the same message either way.
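
    pysdm has historically been shipped in the universe component of the Ubuntu archive, so one thing worth checking (a sketch; it assumes the package still exists for your release) is whether universe is enabled and the package lists are up to date:

      grep universe /etc/apt/sources.list   # the universe component should appear in uncommented lines
      sudo apt-get update
      apt-cache search pysdm                # confirm apt can see the package at all
      sudo apt-get install pysdm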

    Read the article

  • Build Controller status Unavailable issue in TFS2010

    - by jehan
    I ran into this problem a few days back: I was not able to run the builds because the Build Controller was showing a Status of Unavailable. It was showing the below exception:
    There was no endpoint listening at http://fullmachinename:9191/Build/v3.0/Services/Controller/2 that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details.
    After trying out a few things, I looked at the Build Service properties and made the following modifications:
    1) Changed the Local Build Service Endpoint (incoming) from http://machinename.domain.com:9191 to http://machinename:9191
    2) Changed the Connect to Team Project Collection (outgoing) from localhost to the machine name, i.e. from http://localhost:8080/tfs/defaultCollection to http://machinename:8080/tfs/DefaultCollection
    After that I started the Build Services, which fixed the issue: the Build Controller was showing an Available status and was able to run the builds.

    Read the article

  • music for an arcade game?

    - by user717572
    I'm thinking about music for my brick-breaker game, but I don't know how to choose any. If I made a loop from a few seconds of audio, I think it would get annoying very quickly. I also found some longer tracks (about 2 minutes), but when one of those is over it's going to be repeated anyway, and whenever you select a new level you'd have to listen to the same beginning of the song again. I can't put an hour of music in my application, so what would you recommend I do for the music?

    Read the article

  • Double click executable file and nothing happens

    - by Ralf Tiede
    I'm trying to install a game for Linux called Myth 2. Autorun doesn't run when I insert the CD. When I double-click, or right-click and select "Open", on the Setup file, a box appears saying that it's an executable file and asking what I want to do. I click on "Run", but nothing happens after that. I checked the permissions, and they allow running it as an executable. How do I install this game? Please break down the instructions as much as possible; I'm not used to using commands and the Terminal. ;)
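
    For installers that silently do nothing when launched from the file manager, running them from a terminal will often at least print an error message to work from. A rough sketch (the mount point and installer name are placeholders; use whatever ls actually shows for your CD):

      cd /media/cdrom0    # or the folder the file manager opens for the CD
      ls                  # find the real installer name; "Setup" below is just the name from the question
      ./Setup             # run it from the terminal and watch for error output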

    Read the article

  • Bash script won't stay open in background after running through while

    - by jfreak53
    I can't get the following bash script to stay open after the first message is received from nc:

      #!/bin/bash
      port=3333
      nc -l $port | while read msg; do
        notify-send Alert "$msg"
      done

    After the first message it exits. I want it to stay open and continue monitoring for new messages from nc. I know that if I launch nc -l port without the while loop it stays open and I can chat away between the two connections, and even disconnect from the connected host. I am sending the message using:

      echo 'done' | nc IP port
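
    On many systems nc -l exits as soon as the first connection closes, which would explain the script ending after one message. One common workaround is simply to restart the listener in an outer loop; a rough sketch (some netcat builds also offer a -k option to keep listening after a client disconnects, but that depends on which variant is installed):

      #!/bin/bash
      port=3333
      while true; do
        # when the previous connection closes, loop around and listen again
        nc -l $port | while read msg; do
          notify-send Alert "$msg"
        done
      done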

    Read the article

  • Puppet: Getting Started On Windows

    - by Robz / Fervent Coder
    Originally posted on: http://geekswithblogs.net/robz/archive/2014/08/07/puppet-getting-started-on-windows.aspx

    Now that we’ve talked a little about Puppet, let’s see how easy it is to get started.

    Install Puppet
    Let’s get Puppet installed. There are two ways to do that:
    - With Chocolatey: open an administrative/elevated command shell and type: choco install puppet
    - Download and install Puppet manually - http://puppetlabs.com/misc/download-options

    Run Puppet
    Let’s make pasting into a console window work with Control + V (like it should): choco install wincommandpaste
    If you have a cmd.exe command shell open (and Chocolatey installed), type: RefreshEnv
    The previous command will refresh your environment variables, ala Chocolatey v0.9.8.24+. If you were running PowerShell, there isn’t yet a refreshenv for you (one is coming though!).
    If you have to restart your CLI (command line interface) session, or you installed Puppet manually, open an administrative/elevated command shell and type: puppet resource user
    Output should look similar to a few of these:

      user { 'Administrator':
        ensure  => 'present',
        comment => 'Built-in account for administering the computer/domain',
        groups  => ['Administrators'],
        uid     => 'S-1-5-21-some-numbers-yo-500',
      }

    Let's create a user:
      puppet apply -e "user {'bobbytables_123': ensure => present, groups => ['Users'], }"
    Relevant output should look like:
      Notice: /Stage[main]/Main/User[bobbytables_123]/ensure: created
    Run the 'puppet resource user' command again. Note the user we created is there!
    Let’s clean up after ourselves and remove that user we just created:
      puppet apply -e "user {'bobbytables_123': ensure => absent, }"
    Relevant output should look like:
      Notice: /Stage[main]/Main/User[bobbytables_123]/ensure: removed
    Run the 'puppet resource user' command one last time. Note we just removed a user!

    Conclusion
    You just did some configuration management / system administration. Welcome to the new world of awesome! Puppet is super easy to get started with. This is a taste so you can start seeing the power of automation and where you can go with it. We haven’t talked about resources, manifests (scripts), best practices and all of that yet. Next we are going to start to get into more extensive things with Puppet. Next time we’ll walk through getting a Vagrant environment up and running. That way we can do some crazier stuff and when we are done, we can just clean it up quickly.

    Read the article

  • WebLogic Partner Community Newsletter May 2014

    - by JuergenKress
    Dear WebLogic Partner Community member,
    Registration for the Fusion Middleware Summer Camps 2014 is open – register asap for one of our bootcamps, August 4th – 8th 2014 in Lisbon. Please read the details and prerequisites carefully before you register. We expect that, like in the past, the conference will be booked out soon!
    Thanks to you, our WebLogic Specialized Partners, Oracle is #1 for Worldwide Market-Share Total Software Revenue in the Application Platform Market Segment for 2013. Want to know why? Get the new recipes for Oracle WebLogic 12.1.2. Looking for the right server to run WebLogic – try WebLogic on Oracle Database Appliance 2.9. Want to install WebLogic? Play around with the WebLogic Maven Plug-In.
    Thanks for sharing all the additional WebLogic articles within the community: How to use NodeManager to control WebLogic Servers & Retrieving WebLogic Server Name and Port in ADF Application & Glassfish to WebLogic Migration & Advanced GPIO & Building Robots with Java Embedded & Quick & Dirty How-to Guide: Install GlassFish 4 on Raspberry Pi & New Release: Java Micro Edition (ME) 8.
    In our development tool section Frank published Development - Performance and Tuning - Overview on the latest ADF Architecture TV channel. Many of our clients run Forms applications – make sure you run them on WebLogic. Thanks for sharing all the additional development tool articles within the community: Using Oracle WebLogic 12c with NetBeans IDE & Consuming SOAP Service & Check Box Support in ADF Query & New release of the ADF EMG Audit Rules & Working with the Array Data Type in a Table & ADF client-side architecture - Select All & Book Review: NetBeans Platform for Beginners.
    See you in Lisbon!
    To read the complete newsletter please visit http://tinyurl.com/WebLogicNewsMay2014 (OPN account required). To become a member of the WebLogic Partner Community please register at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
    Technorati Tags: WebLogic Community newsletter,newsletter,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • No sound while playing multi-media in Ubuntu 12.04 for XPS15

    - by ved2254
    I have an XPS15 laptop, Core i5, 8GB RAM. Whenever I log in to my laptop I hear the startup bongo sound, but my sound system just doesn't play anything else, be it a short audio clip or a movie. The output of lshw -c multimedia is:

      WARNING: you should run this program as super-user.
        *-multimedia
             description: Audio device
             product: 6 Series/C200 Series Chipset Family High Definition Audio Controller
             vendor: Intel Corporation
             physical id: 1b
             bus info: pci@0000:00:1b.0
             version: 05
             width: 64 bits
             clock: 33MHz
             capabilities: bus_master cap_list
             configuration: driver=snd_hda_intel latency=0
             resources: irq:51 memory:f1c00000-f1c03fff
      WARNING: output may be incomplete or inaccurate, you should run this program as super-user.

    Headphones work just fine but there is no sound from the speakers. Is it a bug in the multimedia players or in ALSA?
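
    Two quick checks that often narrow this kind of problem down (a sketch; mixer channel names vary between machines): rerun lshw as root so the output is complete, and make sure the speaker channels are not muted at the ALSA level:

      sudo lshw -C multimedia   # run as root to avoid the incomplete-output warning above
      alsamixer                 # check that Master / PCM / Speaker are raised and not muted (shown as MM)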

    Read the article

  • PlayOnLinux Unable to find 32bit opengl libraries Dual ATI Videocards

    - by Rodolfo Pires
    I'm currently running Ubuntu 12.04 LTS 64-bit. I installed League of Legends fine the first time with the open-source ATI drivers provided by Ubuntu itself, with no issues at all, but it runs slowly - 20fps at most - because those drivers don't fully support my dual graphics cards. So I restored the system and installed the Linux version of the proprietary ATI drivers from the AMD website, which supports my AMD A8-4500M APU with the AMD Radeon 7640G + 7670M graphics cards, giving me full performance from my system. The problem is that to run League of Legends I need a 32-bit OpenGL library, and the driver automatically detects a 64-bit Linux install and loads the 64-bit libraries but not the 32-bit ones. I need some kind of command to force the 32-bit libraries to load, or to make League of Legends run on the 64-bit ones. I'm kind of new to Ubuntu; I installed the 32-bit ones through the terminal and it still doesn't work - maybe the driver doesn't want to load them. Please help me with this; I don't want to go back to Windows just to play League. I'm not sure what other details to post here, so please tell me what you need.

    Read the article

  • Why doesn't Wolfram Workbench work on 64-bit Ubuntu?

    - by Ian Hincks
    I have downloaded the shell script (Workbench_2.0.0_LINUX.sh), I have run it as root with it giving no complaints, relevant-looking files have appeared in /usr/local/Wolfram/WolframWorkbench/2.0/ and it has created the executable "WolframWorkbench" in /usr/local/bin. However, when I run WolframWorkbench from a terminal it spits out:

      /usr/local/bin/WolframWorkbench: 46: exec: /usr/local/Wolfram/WolframWorkbench/2.0/WolframWorkbench: not found

    That file does indeed exist, and is executable. I have also tried running it directly, and I have also tried running /usr/local/Wolfram/WolframWorkbench/2.0/Executables/WolframWorkbench too. Is there something I'm missing? (I am running Ubuntu 12.04 64-bit with OpenJDK 7.)
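
    On a 64-bit install, an "exec: ... not found" error for a file that clearly exists and is executable is often caused by the target being a 32-bit binary whose loader and libraries are missing, rather than by a wrong path. A quick check worth trying (a sketch; ia32-libs is the 32-bit compatibility package for 12.04-era releases):

      file /usr/local/Wolfram/WolframWorkbench/2.0/WolframWorkbench   # a "32-bit ELF" result points at missing 32-bit support
      sudo apt-get install ia32-libs                                  # 32-bit compatibility libraries on Ubuntu 12.04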

    Read the article

  • No 'Hardware' tab in audio and no profiles

    - by Gene
    If I run Ubuntu 12.x (latest as of May 2012) from the CD, I get full audio settings and sound playing from the speakers. Profiles let me change analog to digital in/out. Once I install from the same CD onto the laptop HD and boot it for the first time, after selecting audio settings there is no 'Hardware' tab and no way to change profiles. The worst part is that the audio device is set to SPDIF, so nothing comes out of the speakers. Very odd how booting off the CD I can get analog audio, while installing to the HD and booting seems to limit the profile to something useless. The laptop is a 5-year-old Dell D820 with 128MB Nvidia video on a 1920x1200 screen and a T7200 CPU. I suspect if I could get the damn HARDWARE tab back in audio settings, I could just select the proper analog profile - just as is the case when running from the boot CD. I have searched the web and found no similar problems... any help appreciated!

    Read the article

  • Ubuntu 12.04 Automounting ntfs partition

    - by kuzyt
    I've looked everywhere to fix this problem but I can't seem to figure out why it's doing this. I have the following /etc/fstab entry to mount an NTFS partition using ntfs-3g:

      UUID=01CD842715EC2180 /media/mediahd02 ntfs defaults,user,noexec,uid=1000,gid=1000,dmask=007,fmask=117 0 2

    The volume label for this partition is "MEDIA02", and I have had no problems with the fstab mounting. The problem, however, is that it automounts again using the MEDIA02 label. I'm not sure automounting is the right term for this, as it's just an empty directory, but deleting this directory and rebooting causes it to appear again. So listing /media I see both MEDIA02 and mediahd02.

      htpc@htpc:~$ cat /etc/fstab
      # /etc/fstab: static file system information.
      #
      # Use 'blkid' to print the universally unique identifier for a
      # device; this may be used with UUID= as a more robust way to name devices
      # that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point> <type> <options> <dump> <pass>
      proc /proc proc nodev,noexec,nosuid 0 0
      # / was on /dev/sdf1 during installation
      UUID=ec027544-b0e7-4145-99a4-905543a9781a / ext4 errors=remount-ro,noatime,discard 0 1
      # swap was on /dev/sdf5 during installation
      UUID=1794409e-723f-41ac-9f31-ae059f377613 none swap sw 0 0
      # Added all the lines below this
      tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0
      UUID=0F70-3B06 /media/mediahd01 vfat defaults,user,noexec,uid=1000,gid=1000,dmask=007,fmask=117 0 2
      UUID=01CD842715EC2180 /media/mediahd02 ntfs defaults,user,noexec,uid=1000,gid=1000,dmask=007,fmask=117 0 2

      htpc@htpc:~$ cat /etc/mtab
      /dev/sdc1 / ext4 rw,noatime,errors=remount-ro,discard 0 0
      proc /proc proc rw,noexec,nosuid,nodev 0 0
      sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
      none /sys/fs/fuse/connections fusectl rw 0 0
      none /sys/kernel/debug debugfs rw 0 0
      none /sys/kernel/security securityfs rw 0 0
      udev /dev devtmpfs rw,mode=0755 0 0
      devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
      tmpfs /tmp tmpfs rw,noatime,mode=1777 0 0
      tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
      none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
      none /run/shm tmpfs rw,nosuid,nodev 0 0
      /dev/sdc1 /media/usbhd-sdc1 ext4 rw,relatime 0 0
      /dev/sdb1 /media/mediahd02 fuseblk rw,noexec,nosuid,nodev,allow_other,default_permissions,blksize=4096 0 0
      /dev/sda5 /media/mediahd01 vfat rw,noexec,nosuid,nodev,uid=1000,gid=1000,dmask=007,fmask=117 0 0
      /dev/sdh1 /media/Windows_7 fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0

    Can someone shed some light on why it's doing this?
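
    A couple of sanity checks that may help narrow down which entry is creating the second directory (a sketch; the device names are taken from the listings above and may differ after a reboot):

      sudo blkid /dev/sdb1    # confirm the partition really has UUID 01CD842715EC2180 and label MEDIA02
      mount | grep -i media   # see exactly which devices are mounted where under /media
      ls -la /media           # check whether MEDIA02 is an empty leftover directory or a live mount point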

    Read the article

  • Difference between ~/folder and /home/username/folder when creating a path in /etc/environment

    - by r0xx4nne
    I had an executable script on my Ubuntu machine located in the ~/project/ directory and I tried to add that path to /etc/environment. So I edited the PATH to this: PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:~/project/". Then I logged out and back in, opened the terminal as su and ran the command to execute my script in that folder, but the result was "command not found". Then I changed the path in /etc/environment to PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/home/r0xx4nne/project/" and voila, it works. Now I can run the executable script inside ~/project/ without fail under the su command. My question is: what's the difference between ~/project and /home/r0xx4nne/project when it comes to creating a path in /etc/environment? Why did it happen like this? I am a newbie and I just want to know more. Thanks for any reply.
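
    The short version is that /etc/environment is not processed by a shell, so nothing ever expands the ~ into /home/r0xx4nne: the tilde is stored as a literal character, and the path search then treats "~/project" as a directory literally named "~". You can reproduce the same effect inside a shell; a small demonstration (a sketch; it assumes an executable script ~/project/hello exists):

      export PATH="$PATH:~/project"       # quoted, so the ~ stays literal, just like the value read from /etc/environment
      hello                               # -> hello: command not found
      export PATH="$PATH:$HOME/project"   # spelling out the absolute path works
      hello                               # -> runs the script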

    Read the article

  • Tired of Windows. How can I switch to Ubuntu?

    - by Noah
    I'm thinking strongly about starting to use Ubuntu. I am tired of Windows and of how every computer I get/have runs it. I'm not real good with computers, but I'm good enough for my needs. I've used Ubuntu in the past on a friend's computer and was always pretty impressed with what he could make it do, what he could make it look like, and the fact that he was able to run Windows along with it. His computer never crashed or got viruses. That sounds like something I could get used to. I'd like to hear what some of you think. Also, I keep reading about a live CD that allows you to run Ubuntu alongside Windows, to basically try it and not have to commit. If so, how can I get that? I haven't been able to find it anywhere.

    Read the article

  • Unit testing - getting started

    - by higgenkreuz
    I am just getting started with unit testing but I am not sure if I really understand the point of it all. I have read tutorials and books on it, but I just have two quick questions:
    1. I thought the purpose of unit testing is to test code we actually wrote. However, it seems to me that in order to just be able to run the test, we have to alter the original code, at which point we are not really testing the code we wrote but rather the code we wrote for testing.
    2. Most of our code relies on external sources. When refactoring our code, however, even if the refactoring would break the original code, our tests would still run just fine, since the external sources are just mock-ups inside our test cases. Doesn't that defeat the purpose of unit testing?
    Sorry if I sound dumb here, but I thought someone could enlighten me a bit. Thanks in advance.

    Read the article

  • How to auto-scan any plugged in usb storage device with clamav?

    - by ossi
    I'd like to do an automatic virus scan of any plugged-in USB device using ClamAV. I'm using Ubuntu 12.04. The closest things I found were "Run clamav on mount of flashdrive" and "How to run a shell script when a new USB storage device is detected?". The first one is not working for me and the second one seems to target a known device. Is there a tutorial around that I've missed? Or can I get some help with udev rules that apply to any USB storage device that is added? Currently nothing I've tried seems to do anything.
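
    While a udev rule is one route, another approach that avoids udev entirely is to watch /media for new mount points and scan whatever appears there. A rough sketch (it assumes ClamAV and the inotify-tools package are installed, and that 12.04 automounts USB storage under /media):

      #!/bin/bash
      # scan any directory that shows up under /media (i.e. newly automounted USB devices)
      inotifywait -m -e create --format '%w%f' /media | while read mountpoint; do
        sleep 5   # give the automounter a moment to finish mounting the device
        notify-send "ClamAV" "Scanning $mountpoint"
        clamscan -r --infected "$mountpoint" >> ~/usb-scan.log 2>&1
        notify-send "ClamAV" "Finished scanning $mountpoint"
      done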

    Read the article
