Search Results

Search found 16386 results on 656 pages for 'relative path'.

Page 197 of 656

  • How to solve CUDA crash when run CUDA example fluidsGL?

    - by sam
    I use Ubuntu 12.04 64-bit with a GTX 560 Ti. I installed CUDA by following these instructions:
      wget http://developer.download.nvidia.com/compute/cuda/4_2/rel/toolkit/cudatoolkit_4.2.9_linux_64_ubuntu11.04.run
      wget http://developer.download.nvidia.com/compute/cuda/4_2/rel/drivers/devdriver_4.2_linux_64_295.41.run
      wget http://developer.download.nvidia.com/compute/cuda/4_2/rel/sdk/gpucomputingsdk_4.2.9_linux.run
      chmod +x cudatoolkit_4.2.9_linux_64_ubuntu11.04.run
      sudo ./cudatoolkit_4.2.9_linux_64_ubuntu11.04.run
      echo "/usr/local/cuda/lib64" > ~/cuda.conf
      echo "/usr/local/cuda/lib" >> ~/cuda.conf
      sudo mv ~/cuda.conf /etc/ld.so.conf.d/cuda.conf
      sudo ldconfig
      echo 'export PATH=$PATH:/usr/local/cuda/bin' >> ~/.bashrc
      chmod +x gpucomputingsdk_4.2.9_linux.run
      ./gpucomputingsdk_4.2.9_linux.run
      sudo apt-get install build-essential libx11-dev libglu1-mesa-dev freeglut3-dev libxi-dev libxmu-dev gcc-4.4 g++-4.4
      sed 's/g++ -fPIC/g++-4.4 -fPIC/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk
      sed 's/gcc -fPIC/gcc-4.4 -fPIC/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk
      sed 's/-L$(SHAREDDIR)\/lib/-L$(SHAREDDIR)\/lib -L\/usr\/lib\/nvidia-current/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk
      sed 's/-L$(SHAREDDIR)\/lib -L\/usr\/lib\/nvidia-current $(NVCUVIDLIB)/-L$(SHAREDDIR)\/lib $(NVCUVIDLIB)/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk
    After that I run ~/NVIDIA_GPU_Computing_SDK/C/bin/linux/release/fluidsGL. The machine gets stuck: even the mouse and keyboard stop responding. How can I solve this? Thank you~
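    Not from the question, but a common culprit with the GL-based SDK samples on Ubuntu is a Mesa libGL shadowing NVIDIA's; a quick check (glxinfo comes with the mesa-utils package):
      glxinfo | grep -i "opengl vendor"   # should say NVIDIA Corporation, not Mesa/Tungsten
      ls -l /usr/lib/libGL.so*            # which libGL does the loader actually resolve?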


  • Mac OS X behind OpenLDAP and Samba

    - by Sam Hammamy
    I have been battling for a week now to get my Mac (Mountain Lion) to authenticate against my home network's OpenLDAP and Samba. From several sources, like the Ubuntu community docs and other blogs, and after a lot of trial and error and piecing things together, I have created a samba.ldif that passes smbldap-populate when combined with apple.ldif, and I have a fully functional OpenLDAP server and a Samba PDC that uses LDAP to authenticate the OS X machine. The problem is that when I log in, the home directory is not created or pulled from the server. I get the following in system.log:
      Sep 21 06:09:15 Sams-MacBook-Pro.local SecurityAgent[265]: User info context values set for sam
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got user: sam
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got ruser: (null)
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got service: authorization
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_principal_for_user(): no authauth availale for user.
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_principal_for_user(): failed: 7
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Failed to determine Kerberos principal name.
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Done cleanup3
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Kerberos 5 refuses you
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): pam_sm_authenticate: ntlm
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_acct_mgmt(): OpenDirectory - Membership cache TTL set to 1800.
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_record_check_pwpolicy(): retval: 0
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Establishing credentials
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Got user: sam
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Context initialised
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): pam_sm_setcred: ntlm user sam doesn't have auth authority
    All that's great and good and I authenticate. Then I get:
      CFPreferences: user home directory for user kCFPreferencesCurrentUser at /Network/Servers/172.17.148.186/home/sam is unavailable. User domains will be volatile.
      Failed looking up user domain root; url='file://localhost/Network/Servers/172.17.148.186/home/sam/' path=/Network/Servers/172.17.148.186/home/sam/ err=-43 uid=9000 euid=9000
    If you're wondering where /Network/Servers/IP/home/sam comes from: a couple of blogs said the OpenLDAP attribute apple-user-homeDirectory should have that value, and NFSHomeDirectory on the Mac should point to apple-user-homeDirectory. I also set the attribute apple-user-homeurl to <home_dir><url>smb://172.17.148.186/sam/</url><path></path></home_dir>, which I found on this forum. Any help is appreciated, because I'm banging my head against the wall at this point.
    By the way, I intend to create a blog on my VPS just for this, and to write an install script in Python that people can download, so no one has to go through what I've had to go through this week :) After some sleep I am going to try to log in from a Windows machine and report back here. Thanks, Sam
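    A check worth trying first (my suggestion, not from the thread): ask the directory node what it actually returns for the user, and whether the network home gets mounted at all. The attribute names assume the standard Open Directory mappings (apple-user-homeDirectory maps to NFSHomeDirectory, apple-user-homeurl to HomeDirectory):
      dscl /LDAPv3/172.17.148.186 -read /Users/sam NFSHomeDirectory HomeDirectory
      mount | grep 172.17.148.186    # is the SMB home share mounted after login?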


  • Ubuntu 10.04 - unable to install Arduino

    - by Newbie
    Hello! At the moment I am trying to install Arduino on my Ubuntu 10.04 (32-bit) computer. I downloaded the latest release from http://arduino.cc/en/Main/Software, cd'ed to the directory and unzipped the package. When I try to run ./arduino, I get the following error:
      Exception in thread "main" java.lang.ExceptionInInitializerError
          at processing.app.Base.main(Base.java:112)
      Caused by: java.awt.HeadlessException
          at sun.awt.HeadlessToolkit.getMenuShortcutKeyMask(HeadlessToolkit.java:231)
          at processing.core.PApplet.<clinit>(Unknown Source)
          ... 1 more
    Here is my java -version output:
      java version "1.6.0_20"
      OpenJDK Runtime Environment (IcedTea6 1.9.5) (6b20-1.9.5-0ubuntu1~10.04.1)
      OpenJDK Server VM (build 19.0-b09, mixed mode)
    Any suggestions on this? That attempt was without the 'arduino' package; I also tried installing it with apt-get (sudo apt-get install arduino). Starting it with the arduino command then causes the following error:
      Exception in thread "main" java.lang.ExceptionInInitializerError
          at processing.app.Preferences.load(Preferences.java:553)
          at processing.app.Preferences.load(Preferences.java:549)
          at processing.app.Preferences.init(Preferences.java:142)
          at processing.app.Base.main(Base.java:188)
      Caused by: java.awt.HeadlessException
          at sun.awt.HeadlessToolkit.getMenuShortcutKeyMask(HeadlessToolkit.java:231)
          at processing.core.PApplet.<clinit>(PApplet.java:224)
          ... 4 more
    Update: I saw that I had several versions of the JRE installed (Sun and OpenJDK), so I uninstalled the OpenJDK JRE. Now, when calling arduino, I get a new error:
      java.lang.UnsatisfiedLinkError: no rxtxSerial in java.library.path thrown while loading gnu.io.RXTXCommDriver
      Exception in thread "main" java.lang.UnsatisfiedLinkError: no rxtxSerial in java.library.path
          at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1734)
          at java.lang.Runtime.loadLibrary0(Runtime.java:823)
          at java.lang.System.loadLibrary(System.java:1028)
          at gnu.io.CommPortIdentifier.<clinit>(CommPortIdentifier.java:123)
          at processing.app.Editor.populateSerialMenu(Editor.java:965)
          at processing.app.Editor.buildToolsMenu(Editor.java:717)
          at processing.app.Editor.buildMenuBar(Editor.java:502)
          at processing.app.Editor.<init>(Editor.java:194)
          at processing.app.Base.handleOpen(Base.java:698)
          at processing.app.Base.handleOpen(Base.java:663)
          at processing.app.Base.handleNew(Base.java:578)
          at processing.app.Base.<init>(Base.java:318)
          at processing.app.Base.main(Base.java:207)
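    Two hedged fixes matching the two errors; the package names are the Ubuntu 10.04-era ones, so verify availability before running:
      sudo apt-get install librxtx-java        # native rxtxSerial library, for the serial-port error
      sudo apt-get install sun-java6-jre       # from the partner repository; the IDE ran poorly on some OpenJDK builds
      sudo update-alternatives --config java   # select the Sun JRE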


  • Data loss through permissions change?

    - by charliehorse55
    I seem to have deleted some files on my media drive, simply by changing the permissions.
    The Story
    I have many operating systems installed on my computer and constantly switch between them. I bought a 1TB HD and formatted it as HFS+ (not journaled). It worked well between OS X and all of my Linux installations while having much better metadata support than NTFS. I never synced the UIDs across my operating systems, so the permissions were always doing funny things. Yesterday I tried to fix the permissions by first changing the UIDs of the other operating systems to match OS X, and then changing the file ownership of all files on the drive to match OS X. About 50% of the files on the drive were originally owned by OS X; the other half were owned by the various Linux installations. I started to change the file permissions for the folders, and that's when it went south.
    The Commands
    These commands were run recursively on one section of the drive:
      sudo chflags nouchg
      sudo chflags -N
      sudo chown myusername
      sudo chmod 666
      sudo chgrp staff
    The Bad
    Sometime during the execution of these commands, all of the files belonging to OS X were deleted. If a folder had Linux-based files it would remain intact, but any folder containing exclusively OS X files was erased. If a folder containing Linux files also contained a subfolder with only OS X files, the subfolder would remain but is inaccessible and displays a file size of 0 bytes. Luckily these commands were only run on the videos folder; I also have a music folder with the same issue, but I did not execute any of these commands on it. Effectively I have examples of the file permissions for all three states: the Linux files before and after, and the OS X files before.
    OS X file before:
      -rw-r--r--@ 1 charliehorse 1000 3634241 15 Nov 2008 /path/to/file
          com.apple.FinderInfo 32
    Linux file before:
      -rw-r--r--@ 1 charliehorse 1000 5321776 20 Sep 2002 /path/to/file
          com.apple.FinderInfo 32
    Linux file after (read only; a different file, but I believe the same permissions originally):
      -rw-rw-rw-@ 1 charliehorse staff 366982610 17 Jun 2008 /path/to/file
          com.apple.FinderInfo 32
    These files still exist, so if there are any other commands to run on them to determine what has happened here, I can do that.
    EDIT
    Running ls on one of the "empty" deleted OS X folders yields this:
      ls: .: Permission denied
      ls: ..: Permission denied
      ls: subdirA: Permission denied
      ls: subdirB: Permission denied
      ls: subdirC: Permission denied
      ls: subdirD: Permission denied
    I believe my files might still be there, but the permissions are screwed.
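    One reading of the EDIT, not confirmed anywhere in the question: a recursive chmod 666 also strips the execute (search) bit from directories, which makes them unlistable and apparently empty rather than deleted. If that is what happened, restoring directory permissions should bring the files back; the mount point below is hypothetical:
      # give directories back their search bit, leave files at a sane default
      sudo find /Volumes/Media/Videos -type d -exec chmod 755 {} +
      sudo find /Volumes/Media/Videos -type f -exec chmod 644 {} +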


  • Sitefinity SimpleImageSelector to return Url of image instead of Guid

    - by Joey Brenn
    It's been quite a while, but I've found something to blog about! I've been working with Sitefinity for some time now, and one of the things that I've struggled with (and I'm not the only one) is something that should be simple. See, all I want to do is choose a picture from one of the libraries within Sitefinity and display it via the GUID it returns or the path of the URL. I want to do this from my user control or a custom control. Well, it turns out that this is not built in; at least I was not able to get anything working correctly until I found this post. However, I want to store the relative URL of the image, so I made a small change to make it return the URL instead of the GUID. To make the change, in the SimpleImageSelectorDialog.js file, on line 43, change the original line:
      var selectedValue = this.get_imageSelector().get_selectedImageId();
    to the new line:
      var selectedValue = this.get_imageSelector().get_selectedImageUrl();
    Of course, save and recompile the project, and now it will return the URL instead of the GUID of the image from the chosen album.


  • How do I enable JPEG Support for PHP?

    - by ngache
    My Configure Command doesn't say anything about jpg, nor gif/png, but I can see GIF/PNG support in the output of phpinfo(). I built PHP with --with-gd, but only GIF Support and PNG Support appear in the output of phpinfo(). How do I enable JPEG Support?
    UPDATE
    I got this problem when compiling:
      Sorry, I cannot run apxs. Possible reasons follow:
      1. Perl is not installed
      2. apxs was not found. Try to pass the path using --with-apxs2=/path/to/apxs
      3. Apache was not built using --enable-so (the apxs usage page is displayed)
      The output of /usr/local/apache2/bin/apxs follows:
      cannot open /usr/local/apache2/build/config_vars.mk: No such file or directory at /usr/local/apache2/bin/apxs line 218.
    What should I do now?
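    For the original JPEG question (separate from the apxs failure): GD only gains JPEG support when the libjpeg headers are present at configure time and the build is pointed at them. A hedged sketch for a PHP 5.x source build; the paths are assumptions:
      sudo apt-get install libjpeg-dev    # or your distro's libjpeg headers package
      ./configure --with-apxs2=/usr/local/apache2/bin/apxs \
                  --with-gd --with-jpeg-dir=/usr --with-png-dir=/usr
      make && sudo make install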


  • Partnering with your Applications – The Oracle AppAdvantage Story

    - by JuergenKress
    So, what is Oracle AppAdvantage?
    - A practical approach to adopting cloud, mobile, social and other trends
    - A guided path to aligning IT more closely with business objectives
    - Maximizing the value of existing investments in applications
    - A layered approach to simplifying IT, building differentiation and bringing innovation
    - All of the above?
    Suggested tweets:
    - Enhance the value of your existing applications investment with #Oracle #AppAdvantage
    - Aligning biz and IT expectations on simplifying IT, building differentiation and innovation #AppAdvantage
    - Adopt a pace-layered approach to extracting biz value from your apps with #AppAdvantage
    - Bringing #cloud, #social, #mobile to your apps with #Oracle #AppAdvantage
    Embracing Situational IT
    In the next IT Leaders Editorial, Rick Beers discusses the necessity of IT disruption and #AppAdvantage. Suggested tweets:
    - Rick Beers sheds light on Situational Leadership and the path to success #AppAdvantage
    - Rick Beers draws parallels with CIOs' strategic thinking and the #Oracle #AppAdvantage approach
    - Do you have this paper on your summer reading list? Aligning biz and IT #AppAdvantage
    - What does Situational Leadership have to do with Oracle AppAdvantage? Catch the next piece in Rick Beers' monthly series of IT Leaders Editorials and find out. #AppAdvantage
    Middleware Minutes with Howard Beader – August edition
    In his inaugural post, Howard Beader, senior director for Oracle Fusion Middleware, discusses recent industry trends including mobile, cloud, fast data and integration, and how these are shaping IT and business requirements. Suggested tweets:
    - In the quarterly column, @hbeader discusses the impact of #cloud, #mobile, #fastdata on #middleware
    - Making #cloud, #mobile, #fastdata a part of your IT strategy with #middleware
    - What keeps the #Oracle #middleware team busy? Find out in the inaugural post of the quarterly update on #middleware
    - Recent #middleware news along with a preview of things to come from #Oracle, in @hbeader's quarterly column
    SOA & BPM Partner Community
    For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.


  • configuration transfer over scp on commit not working on Juniper EX-2200 switch

    - by liv2hak
    I am making a series of configuration changes on a Junos EX-2200 switch. The switch is connected to a PC via an Ethernet cable. The IP address of the switch is 192.168.1.1, and I can ping between the switch and the PC (192.168.1.10) in both directions. After the changes, I run the following commands:
      set system archival configuration transfer-on-commit
      set system archival configuration archive-sites "scp://karthik@192.168.1.10:/home/karthik/ws_karthik/sw1_config_1.txt" password godfather
      commit
    There is a user with username "karthik" and password "godfather", and the path shown above exists on the system. However, I don't see the configuration file sw1_config_1.txt created at the specified path. I have also verified that sshd is running on the PC (192.168.1.10). Am I doing something wrong here? It would be great if anyone could help me out.
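    Two hedged checks, based on how Junos archival is generally documented rather than on anything in the question: archive-sites expects a destination directory (Junos generates the file name itself), and a manual copy from the switch's operational mode isolates SSH/SCP problems from archival ones:
      set system archival configuration archive-sites "scp://karthik@192.168.1.10/home/karthik/ws_karthik/" password godfather
      file copy /config/juniper.conf.gz scp://karthik@192.168.1.10/home/karthik/ws_karthik/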


  • How to customize email FROM header with an email from a different domain ?

    - by user40763
    How can I customize the mail FROM header in our email marketing application, to enable our customers to specify their OWN email address (from their domain)? Currently the customer specifies his own domain and we use it in the Reply-To header.
    CURRENTLY
      From: our_email@our_domain.com
      Reply-To: customer_email@customer_domain.com
      Return-Path: our_email@our_domain.com
    WHAT WE NEED
      From: customer_email@customer_domain.com
      Reply-To: customer_email@customer_domain.com
      Return-Path: our_email@our_domain.com
    We do it this way to avoid getting blacklisted, because mail servers like Gmail or Hotmail would consider it a MAIL HEADER FORGERY ATTEMPT. But our customers keep asking us to make the FROM header customizable. Can someone help us?
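    Not from the question, but the pattern most email service providers use: keep the envelope sender (Return-Path) on your own domain for bounce handling, add an RFC 5322 Sender header on your domain, and have the customer authorize your mail servers in their domain's SPF record. A hedged sketch with placeholder names:
      ; hypothetical TXT record the customer publishes on customer_domain.com
      customer_domain.com.  IN  TXT  "v=spf1 include:spf.our_domain.com ~all"

      From: customer_email@customer_domain.com
      Sender: bounces@our_domain.com
      Reply-To: customer_email@customer_domain.com
      Return-Path: bounces@our_domain.com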


  • How to properly add .NET assemblies to Powershell session?

    - by amandion
    I have a .NET assembly (a DLL) which is an API to backup software we use here. It contains some properties and methods I would like to take advantage of in my PowerShell script(s). However, I am running into a lot of issues, first with loading the assembly, then with using any of the types once the assembly is loaded. The complete file path is C:\rnd\CloudBerry.Backup.API.dll. In PowerShell I use:
      $dllpath = "C:\rnd\CloudBerry.Backup.API.dll"
      Add-Type -Path $dllpath
    I get the error below:
      Add-Type : Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.
      At line:1 char:9
      + Add-Type <<<< -Path $dllpath
          + CategoryInfo : NotSpecified: (:) [Add-Type], ReflectionTypeLoadException
          + FullyQualifiedErrorId : System.Reflection.ReflectionTypeLoadException,Microsoft.PowerShell.Commands.AddTypeCommand
    Using the same cmdlet on another .NET assembly, DotNetZip, which has examples of using the same functionality on its site, also does not work for me. I eventually found that I am seemingly able to load the assembly using reflection:
      [System.Reflection.Assembly]::LoadFrom($dllpath)
    Although I don't understand the difference between the methods Load, LoadFrom, and LoadFile, that last method seems to work. However, I still seem to be unable to create instances or use objects. Each time I try, I get errors describing that PowerShell is unable to find any of the public types. I know the classes are there:
      $asm = [System.Reflection.Assembly]::LoadFrom($dllpath)
      $cbbtypes = $asm.GetExportedTypes()
      $cbbtypes | Get-Member -Static
      ---- start of excerpt ----
      TypeName: CloudBerryLab.Backup.API.BackupProvider
      Name                MemberType  Definition
      ----                ----------  ----------
      PlanChanged         Event       System.EventHandler`1[CloudBerryLab.Backup.API.Utils.ChangedEventArgs] PlanChanged(Sy...
      PlanRemoved         Event       System.EventHandler`1[CloudBerryLab.Backup.API.Utils.PlanRemoveEventArgs] PlanRemoved...
      CalculateFolderSize Method      static long CalculateFolderSize()
      Equals              Method      static bool Equals(System.Object objA, System.Object objB)
      GetAccounts         Method      static CloudBerryLab.Backup.API.Account[], CloudBerry.Backup.API, Version=1.0.0.1, Cu...
      GetBackupPlans      Method      static CloudBerryLab.Backup.API.BackupPlan[], CloudBerry.Backup.API, Version=1.0.0.1,...
      ReferenceEquals     Method      static bool ReferenceEquals(System.Object objA, System.Object objB)
      SetProfilePath      Method      static System.Void SetProfilePath(string profilePath)
      ---- end of excerpt ----
    Trying to use static methods fails, and I don't know why:
      [CloudBerryLab.Backup.API.BackupProvider]::GetAccounts()
      Unable to find type [CloudBerryLab.Backup.API.BackupProvider]: make sure that the assembly containing this type is loaded.
      At line:1 char:42
      + [CloudBerryLab.Backup.API.BackupProvider] <<<< ::GetAccounts()
          + CategoryInfo : InvalidOperation: (CloudBerryLab.Backup.API.BackupProvider:String) [], RuntimeException
          + FullyQualifiedErrorId : TypeNotFound
    Any guidance appreciated!!
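    A hedged sketch of the two usual workarounds: the ReflectionTypeLoadException generally means a dependency of the DLL failed to load, and LoaderExceptions names it; and once an assembly is loaded via reflection, its statics are reachable through a [type] variable even when the type literal fails. The type and method names below are the ones from the excerpt above:
      # LoaderExceptions usually names a missing dependency of the DLL
      try {
          Add-Type -Path "C:\rnd\CloudBerry.Backup.API.dll" -ErrorAction Stop
      } catch {
          $_.Exception.LoaderExceptions | ForEach-Object { $_.Message }
      }

      # Statics via a type object rather than a type literal
      $asm  = [System.Reflection.Assembly]::LoadFrom("C:\rnd\CloudBerry.Backup.API.dll")
      $type = $asm.GetType("CloudBerryLab.Backup.API.BackupProvider")
      $type::GetAccounts()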


  • Problem closing MDI child window in Terminal Services/Remote Desktop Connection 7.0

    - by Justin Love
    I have one user whose computer just got updated to the 7.0 Remote Desktop Connection. Concurrently, she has started having a problem closing the MDI child windows in an old FoxPro application running on the remote server. We have two different servers, both 2003, running the same application, one locally and one at a remote office. Only the remote office server is giving trouble. It works fine for me, even when logging into her TS account. No other users have complained. The other day the same user experienced an error message (path not found, for a path showing a localization placeholder) when starting the RDC; it was fixed by a reboot. I suspect she may have had RDC running during the 7.0 upgrade.


  • Aptana Under linux

    - by fatnjazzy
    Hey, I downloaded Aptana Studio 2.0 and unzipped it on the desktop. I'm trying to run Aptana Studio 2.0 under openSUSE 11 and I get the following error. Any idea why? Thanks
      JVM terminated. Exit code=-1
      -Xms40m
      -Xmx384m
      -Djava.awt.headless=true
      -XX:MaxPermSize=256m
      -Djava.class.path=/home/avi/Desktop/Aptana Studio 2.0/plugins/org.eclipse.equinox.launcher_1.0.200.v20090520.jar
      -os linux
      -ws gtk
      -arch x86
      -showsplash
      -launcher /home/avi/Desktop/Aptana Studio 2.0/AptanaStudio
      -name AptanaStudio
      --launcher.library /home/avi/Desktop/Aptana Studio 2.0/plugins/org.eclipse.equinox.launcher.gtk.linux.x86_1.0.200.v20090520/eclipse_1206.so
      -startup /home/avi/Desktop/Aptana Studio 2.0/plugins/org.eclipse.equinox.launcher_1.0.200.v20090520.jar
      -application com.aptana.ide.desktop.integration.Application
      -vm /usr/lib/jvm/java-1.6.0-openjdk-1.6.0/jre/bin/../lib/i386/client/libjvm.so
      -vmargs
      -Xms40m
      -Xmx384m
      -Djava.awt.headless=true
      -XX:MaxPermSize=256m
      -Djava.class.path=/home/avi/Desktop/Aptana Studio 2.0/plugins/org.eclipse.equinox.launcher_1.0.200.v20090520.jar
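    Two guesses, neither confirmed by the question itself: Eclipse-based launchers of that era often failed on install paths containing spaces, and some builds preferred the Sun JVM over OpenJDK. A quick test:
      mv ~/Desktop/"Aptana Studio 2.0" ~/aptana-studio-2.0    # a path without spaces
      ~/aptana-studio-2.0/AptanaStudio
      # if it still dies, try pointing -vm at a Sun JRE's libjvm.so instead of the OpenJDK one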


  • What do neglected O'Reilly book topics tell us about that topic?

    - by Peter Turner
    Does anybody know how O'Reilly chooses topics to publish? For some reason, I don't see how it can be based on demand. The reason I ask is that they haven't published a Delphi book in almost 12 years, and Object Pascal is at least as esoteric as Erlang, as practical as PHP, and as robust as C++. So, does someone know the rationale behind O'Reilly's publishing methodology, or what it is supposed to tell us about the relative popularity or usefulness of any given language or programming technique? Oh, I forgot about pig and robotlegs.


  • PowerShell Try Catch Finally

    - by PointsToShare
    I am a relative novice to PowerShell and tried (pun intended) to use "Try Catch Finally" in my scripts. Alas, the structure that we love and use in C# (or even - shudder of shudders - in VB) does not always work in PowerShell. It turns out that it works only when the error is a terminating error (whatever that means). Well, you can turn all your errors into the terminating kind by simply setting $ErrorActionPreference = "Stop", and later resetting it back to "Continue", which is its normal setting. Now, the lazy approach is to start all your scripts with:
      $ErrorActionPreference = "Stop"
    And to end all of them with:
      $ErrorActionPreference = "Continue"
    But this opens you up to trouble, because should your script have an error that you neglected to catch (it even happens to me!), your session will now have all its errors as "terminating". Obviously this is not a good thing, so instead let's put these two setups at the beginning of each Try block and in the Finally block, as sketched below. That's All Folks!!
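    A minimal sketch of the pattern described above, with a hypothetical failing command standing in for real work:
      try {
          $ErrorActionPreference = "Stop"      # make every error terminating inside the Try
          Get-Content "C:\no\such\file.txt"    # hypothetical command that fails
      }
      catch {
          Write-Host "Caught: $_"
      }
      finally {
          $ErrorActionPreference = "Continue"  # restore the normal setting no matter what
      }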


  • Mercurial hgwebdir configuration URL

    - by Jonathan Sternberg
    I'm setting up an hgwebdir configuration for the first time with Mercurial on Apache 2. I can see the three repositories I've set up on the index page, and I've figured out how to modify their names so they don't resemble the directory path. But when I click through to one of the repositories, the URL becomes http://localhost/hg/hgweb.cgi/path/to/repos. I would like the URL to be http://localhost/hg/name instead, as that is easier to remember for people who want to clone the repository. Is there any way to do that with hgwebdir?
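    A hedged sketch of the two pieces that usually achieve this (file locations are assumptions): map each repository to a short virtual path in the hgweb config, and hide the CGI script itself with an Apache ScriptAliasMatch:
      # hgweb.config -- short names instead of filesystem paths
      [paths]
      name = /srv/hg/path/to/repos

      # Apache config -- run the CGI for everything under /hg without it appearing in the URL
      ScriptAliasMatch ^/hg(.*) /var/www/hg/hgweb.cgi$1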


  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction
    More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site.
    Hive Actions: Prepping for Pig
    In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie.
    I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code.
      CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column 1, column2)
      PARTITIONED BY (yr string)
      STORED AS ...
      LOCATION '/user/oracle/weather/historic';
    As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of their long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access.
      ALTER TABLE historic_weather ADD IF NOT EXISTS
      PARTITION (yr='2010') LOCATION '/user/oracle/weather/historic/yr=2011';

      INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history'
      SELECT w.stn, w.wban, w.weather_year, w.weather_month,
             w.weather_day, w.temp, w.dewp, w.weather
      FROM (
        FROM historic_weather
        SELECT TRANSFORM(...)
        USING '/path/to/hive/filters/ncdc_parser.py'
        AS stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather
      ) w;
    Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called weather_train.hql.
    Starting Our Workflow
    Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph. Coordinator jobs can take all the same actions as workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling (a minimal sketch of such a file appears at the end of this entry). The bare minimum workflow XML defines a name, a starting point, and an end point:
      <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1">
        <start to="ParseNCDCData"/>
        <end name="end"/>
      </workflow-app>
    To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure.
      <action name="ParseNCDCData">
        <hive xmlns="uri:oozie:hive-action:0.2">
          <job-tracker>localhost:8021</job-tracker>
          <name-node>localhost:8020</name-node>
          <configuration>
            <property>
              <name>oozie.hive.defaults</name>
              <value>/user/oracle/weather_ooze/hive-default.xml</value>
            </property>
          </configuration>
          <script>ncdc_parse.hql</script>
        </hive>
        <ok to="WeatherMan"/>
        <error to="end"/>
      </action>
    There are a couple of things to note here:
    - I have to give the FQDN (or IP) and port of my JobTracker and NameNode.
    - I have to include a hive-default.xml file.
    - I have to include a script file.
    - The hive-default.xml and script file must be stored in HDFS.
    That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-default.xml files on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain:
    - workflow.xml
    - hive-default.xml (make sure this file contains your metastore connection data)
    - ncdc_parse.hql
    Adding Pig to the Ooze
    Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows:
      <action name="WeatherMan">
        <pig>
          <job-tracker>localhost:8021</job-tracker>
          <name-node>localhost:8020</name-node>
          <script>weather_train.pig</script>
        </pig>
        <ok to="end"/>
        <error to="end"/>
      </action>
    Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My Pig script registers the Weka JAR and a chunk of Jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the Pig script, because Pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka JAR goes. While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the Pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding JARs to the distributed cache from Oozie's Pig Cookbook.
    Making the Workflow Work
    We've got a workflow defined and have collected all the components we'll need to run. But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows:
      nameNode=hdfs://localhost:8020
      jobTracker=localhost:8021
      queueName=default
      weatherRoot=weather_ooze
      mapreduce.jobtracker.kerberos.principal=foo
      dfs.namenode.kerberos.principal=foo
      oozie.libpath=${nameNode}/user/oozie/share/lib
      oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot}
      outputDir=weather-ooze
    While some of the pieces of the properties file are familiar (e.g., the JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of JARs necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed, and write the output to the output directory.
    We're finally ready to submit our job! After all that work we only need to do a few more things:
    1. Validate our workflow.xml
    2. Copy our working directory to HDFS
    3. Submit our job to the Oozie server
    4. Run our workflow
    Let's do them in order. First validate the workflow:
      oozie validate workflow.xml
    Next, copy the working directory up to HDFS:
      hadoop fs -put working_dir /user/oracle/working_dir
    Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument.
      oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit
    We've submitted the job, but we don't see any activity on the JobTracker? All I got was this funny bit of output:
      14-20120525161321-oozie-oracle
    This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job:
      oozie job -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle
    Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run." This will prep and run the workflow immediately.
    Takeaway
    So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.
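    As promised above, a minimal coordinator sketch; the daily schedule, the dates, and the reuse of the job.properties variables are my assumptions, not from the article:
      <coordinator-app name="WeatherCoord" frequency="${coord:days(1)}"
                       start="2012-01-01T00:00Z" end="2013-01-01T00:00Z"
                       timezone="UTC" xmlns="uri:oozie:coordinator:0.1">
        <action>
          <workflow>
            <app-path>${nameNode}/user/${user.name}/${weatherRoot}</app-path>
          </workflow>
        </action>
      </coordinator-app>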


  • TFS 2010 Build: Dealing with the API restriction error

    - by Jakob Ehn
    Recently I’ve come across this error a couple of times when running builds that execute unit tests using test containers:
      API restriction: The assembly 'file:///C:\Builds\<path>\myassembly.dll' has already loaded from a different location. It cannot be loaded from a new location within the same appdomain.
    Every time I’ve got this error, the project has been a web application, and the path to the assembly points down to the _PublishedWebsites directory that is created beneath the Binaries folder during a team build. The error description really says it all (although slightly cryptically): when using test containers, MSTest needs to load all assemblies and see if they contain any unit tests. During this search, it finds 'myassembly.dll' in two different locations. First it is found directly beneath the Binaries folder, and then it is also found beneath the _PublishedWebsites\Project\bin folder. The reason is that the default setting for test containers in a TFS 2010 build definition is **\*test*.dll. This pattern means that MSTest will search recursively for all assemblies beneath the Binaries folder, and during the search it will find myassembly.dll twice. The solution is simple: set the Test assembly file specification property to *test*.dll instead; this will disable the recursive search.


  • Continuing permissions issues - ASP.net, IIS 7, Server 2008 - 0x80070005 (http 500.19) error

    - by Re-Pieper
    I created an ASP.NET MVC web application and I am trying to set it up in IIS.
    The error:
      HTTP error 500.19, error code 0x80070005, "Cannot read configuration file due to insufficient permissions", config file: C:\inetpub\wwwroot\BudgetManagerMain\BudgetManager\web.config
    If I set the app pool to use 'administrator', I have no problems and can access the site just fine. If I set it to NETWORK SERVICE (or anything else, including self-created admin or non-admin user accounts), I get the above error.
    Things I have tried:
    - The identity for the application pool named 'test' is NetworkService.
    - Set full access privileges for wwwroot and all children files/folders.
    - Verified effective permissions: NETWORK SERVICE has full access.
    - Authentication on my site is set to anonymous, running under the application pool identity.
    - I do not have any physical path credentials set on the website.
    - Confirmed the website is set to run under the application pool named 'test'.
    Using Process Monitor, here is a summary of what I found on the ACCESS DENIED event:
      EVENT TAB
      Class: File System
      Operation: CreateFile
      Result: Access Denied
      Path: ..\web.config
      Desired Access: Generic Read
      Disposition: Open
      Options: Synchronous IO Non-Alert, Non-Directory File
      Attributes: N
      ShareMode: Read
      AllocationSize: n/a
      PROCESS TAB
      ...lots of stuff that seems irrelevant
      User: NT AUTHORITY\NETWORK SERVICE
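    Not part of the question, but the check I would run next: confirm the effective NTFS grant from the command line rather than the GUI, since a deny ACE or a broken inheritance chain higher up the path also produces 0x80070005. A hedged sketch using the path from the question:
      rem grant read+execute, inherited by subfolders and files
      icacls C:\inetpub\wwwroot\BudgetManagerMain /grant "NT AUTHORITY\NETWORK SERVICE:(OI)(CI)RX" /T
      rem then list the effective ACL on the failing file
      icacls C:\inetpub\wwwroot\BudgetManagerMain\BudgetManager\web.config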


  • Ever helpful Windows…

    - by John Breakwell
    I’m doing some troubleshooting for a relative and asked them to send me a zipped copy of their registry, which they dutifully did. When I tried to extract the registry file, though, Windows jumped in the way and said “No”. This made sense, as registry files are dangerous things in the hands of the ignorant. So I clicked the link to see if it would tell me how to get at the reg file, but found the result less than helpful. So off to the Internet, where I found an excellent answer on how to get round this. Now on to the much harder part of fixing the original problems.


  • Adding an ASP website in IIS7.5 on Windows 7

    - by birdus
    I'm trying to add an ASP website under IIS 7.5 on Windows 7 and am having no luck so far. This site is just for me to hit locally. I need to make some changes to some of the HTML in some of the ASP files, and I just need to be able to test my changes as I make them. I installed IIS and checked the box for ASP. Next, I added an application pool which I called ASP and which has "No Managed Code" and "ASP" set. Next, I added the website by right-clicking "Sites" then clicking "Add Web Site...". I gave it a name, set it to use the ASP app pool, pointed it to the path where the ASP code is (I left it at pass-through authentication), and typed in 5555 as the port, so as to not interfere with the default website. The code is sitting on my server, and the path simply uses the mapped drive that I always use to access files on that drive array. When I type in http://mysite:5555, I get "could not find mysite:5555". I don't really know if all these settings are correct or what else I should try. What am I missing? Thanks, Jay
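    One guess about the "could not find mysite:5555" part, which is a name-resolution symptom rather than an IIS one: nothing maps the host name to the local machine. Assuming the site name really is "mysite", either browse to http://localhost:5555 or add a hosts entry:
      # C:\Windows\System32\drivers\etc\hosts (edit as administrator)
      127.0.0.1    mysite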


  • Keeping crosshairs & GUI onscreen - SFML

    - by nihohit
    I read this question, but didn't understand the implementation suggestions with SFML in C#. For example, right now I'm just trying to make sure that the mouse crosshairs stay onscreen constantly. I tried using this code:
      View lastView = this._mainWindow.GetView();
      this._mainWindow.SetView(this._mainWindow.DefaultView);
      this._mainWindow.Draw(crosshair);
      this._mainWindow.SetView(lastView);
    after drawing all other sprites and before calling this._mainWindow.Display(), where beforehand I set crosshair.Position based on its position relative to the window, not the view. This just keeps the screen locked and prevents screen scrolling. Any suggestions?


  • Adding multiple gradients to object in Adobe Illustrator

    - by Vass
    Hi, I have an object which is a path (a nose, to be specific). Now I want both a linear gradient and a radial gradient applied to the object. Do these have to be separate gradient objects? I can't find a way to add multiple separate gradients to a single path, so do I duplicate the object and then apply a new gradient to each copy? And what would the layer transparency look like; would a 'normal' overlay of the layers work? I am afraid of multiple shadows creating double-dark regions, but maybe that is as it's supposed to be, if you think in terms of classical art and draw shadows in terms of each light obstruction.


  • Determine web page draw time via a program

    - by Kevin Burke
    Google Chrome has a nice tool for determining the time at which the page begins drawing, in the Network tab of Developer Tools. Similarly, sites like webpagetest.org can tell you the draw time and give you the whole waterfall of page loads for a given web page. I was wondering if I could automate the process of finding the time to first draw for all of the pages on my site, so I can share this data within my company. Obviously the page draw time will depend on the latency and throughput of your connection, but I'm more concerned with the relative data about pages on our site. Can I get this data from Selenium or another tool? Thanks, Kevin
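    One hedged approach: drive each page with Selenium and read the browser's Navigation Timing data from JavaScript. Note the assumptions: the URL is a placeholder, a chromedriver binary must be on the PATH, and domLoading minus navigationStart is only a rough proxy for first draw.
      from selenium import webdriver

      driver = webdriver.Chrome()
      driver.get("http://www.example.com/somepage")  # placeholder URL

      # Navigation Timing API: millisecond timestamps recorded by the browser
      delta = driver.execute_script(
          "var t = window.performance.timing;"
          "return t.domLoading - t.navigationStart;")
      print("approx. time to first draw: %d ms" % delta)

      driver.quit()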


  • Using the option port on my Watchguard Firebox as a 2nd gateway exit point?

    - by Donovan
    I'm working on a network project in which I have to design our network to provide two different exit points. The points are differentiated by their path through the corporate network: one of them travels through some monitoring hardware, the other does not. We have a WatchGuard Firebox in use as our gateway. Currently the network side provides the unmonitored exit point. I was wondering: if I hooked the option port to our LAN at a point that would force traffic through the monitored path, would it cause any problems? Access to the unmonitored gateway port would be restricted by IP, which would force everyone not authorized to use the monitored gateway port. I thought that with the above design I might be able to get away with not having to buy another Firebox to achieve the design I want. Thanks, D


  • Find and hide file extension

    - by Daveo
    I am trying to find all files that have the same filename (excluding the file extension) occurring exactly 3 times. I also need the full path to each file. What I have currently is:
      # get file names without extensions
      alias lse="ls -1R | sed -e 's/\.[a-zA-Z]*$//'"
      # prepend the current dir and report names occurring 3 times
      lse | sed "s;^;`pwd`/;" | sort | uniq -c | grep " 3 "
    This runs, however pwd prints the folder I ran the command from, not the path to each file. So I tried find:
      find . -type f | sed "s#^.#$(pwd)#" | sort | uniq -c
    This runs but includes the file extension. When I try to add sed -e 's/\.[a-zA-Z]*$//' I get errors, as I am not sure how to combine the two sed commands; I cannot seem to pipe a second time to sed. So what I am trying to do is:
      find . -type f | sed "s#^.#$(pwd)#" | sed -e 's/\.[a-zA-Z]*$//'"| sort | uniq -c | grep " 3 "
    but this does not run.
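    A hedged sketch of one way to get what the question asks for. Two observations: the unbalanced double-quote after the second sed in the last command above is likely the syntax error, and piping through sed twice is perfectly legal, so no combining is actually needed. Asking find for absolute paths removes one sed entirely, and awk makes the exact-count intent explicit:
      # absolute paths from find, extension stripped by sed, exact count of 3 via awk
      find "$(pwd)" -type f \
        | sed 's/\.[a-zA-Z]*$//' \
        | sort | uniq -c \
        | awk '$1 == 3'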

