Search Results

Search found 14454 results on 579 pages for 'unc path'.

  • Learn to Create Applications Using MySQL with MySQL for Developers Course

    - by Antoinette O'Sullivan
    If you are a database developer who wants to create applications using MySQL, then the MySQL for Developers course is for you. This course covers how to plan, design and implement applications using the MySQL database, with realistic examples in Java and PHP. To see more details of the content of the MySQL for Developers course, go to http://oracle.com/education/mysql, click on the Learning Paths tab and select the MySQL Developer path. You can take this course as a: Live-Virtual Event: Follow this live instructor-led event from your own desk - no travel required. Choose from a selection of events on the calendar in languages such as English, German and Korean. In-Class Event: Travel to an education center to take this class. Below is a sample of events on the schedule.
      Location | Date | Language
      Vienna, Austria | 4 March 2013 | German
      London, England | 4 March 2013 | English
      Gummersbach, Germany | 11 February 2013 | German
      Hamburg, Germany | 14 January 2013 | German
      Munich, Germany | 15 April 2013 | German
      Budapest, Hungary | 15 April 2013 | Hungarian
      Milan, Italy | 21 January 2013 | Italian
      Rome, Italy | 11 March 2013 | Italian
      Amsterdam, Netherlands | 28 January 2013 | Dutch
      Nieuwegein, Netherlands | 13 May 2013 | Dutch
      Lisbon, Portugal | 18 February 2013 | European Portuguese
      Porto, Portugal | 18 February 2013 | European Portuguese
      Barcelona, Spain | 18 February 2013 | Spanish
      Madrid, Spain | 28 January 2013 | Spanish
      Bern, Switzerland | 11 April 2013 | German
      Zurich, Switzerland | 11 April 2013 | German
      Nairobi, Kenya | 21 January 2013 | English
      Petaling Jaya, Malaysia | 17 December 2012 | English
      Sao Paulo, Brazil | 11 March 2013 | Brazilian Portuguese
    For more information on this class or other courses on the authentic MySQL curriculum, or to express your interest in additional events, go to http://oracle.com/education/mysql. Note that many organizations deploy both Oracle Database and MySQL side by side to serve different needs, and as a database professional you can find training courses on both topics at Oracle University! Check out the upcoming Oracle Database training courses and MySQL training courses. Even if you're only managing Oracle Databases at this point in time, getting familiar with MySQL will broaden your career path, given the growing job demand.

    Read the article

  • How to block some disks from probes on Linux boot?

    - by Igor Velkov
    My Linux host is connected to a SAN over an FC interface. It connects through one path and sees some LUNs it cannot access, because they require another path that is not available to the host. At boot, Linux probes every LUN it can see, gets read errors on the inaccessible LUNs, and hangs there for a very long time. Is there a way to disable any access to certain LUNs at boot time, and later on? I found device-ignore filters for LVM and multipath, but they do not help during the boot process. LVM is still affected despite the filter, and gives me an I/O error on every operation such as lvdisplay and vgdisplay, but that is another question.
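    A minimal sketch of the kind of filters the question refers to (device names and the WWID below are placeholders, not values from the post). For such filters to matter during boot, the initramfs usually has to be rebuilt after editing the files:
      # /etc/lvm/lvm.conf -- reject the unreachable LUNs so LVM never scans them
      devices {
          filter = [ "r|/dev/sdx|", "r|/dev/sdy|", "a|.*|" ]
      }

      # /etc/multipath.conf -- blacklist the same LUNs by WWID
      blacklist {
          wwid "36006016012345678900000000000abcd"
      }

      # rebuild the initramfs so the filters also apply in early boot
      sudo update-initramfs -u     # Debian/Ubuntu
      sudo dracut -f               # RHEL/SUSE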

    Read the article

  • Ionics Rewrite Filter setup on IIS 5.1

    - by Neil Aitken
    I'm trying to configure IIRF 2 on IIS 5.1 running on XP Pro, so that I can run the Zend Framework. I've managed to get the filter running on a second website that I set up using one of the IIS admin scripts. When I go to iirfStatus I get this: the problem is that the .ini path for the site is pointing to c:\windows\system32\Irif.ini rather than the site root. If I try creating an IIS application under IIS > Website Properties > Home Directory, then iirfStatus stops working entirely. Any ideas how I can set the .ini path correctly, or will I only be able to get away with this on a proper server edition of IIS?

    Read the article

  • How do I fix the error: CS1548: Cryptographic failure while signing assembly ?

    - by Paula DiTallo
    The full error in Microsoft Visual Studio on a compile looks like this:
      error CS1548: Cryptographic failure while signing assembly 'C:\Program Files\Microsoft SQL Server\100\Samples\Analysis Services\Programmability\AMO\AMOAdventureWorks\CS\StoredProcedures\obj\Debug\StoredProcedures.dll'
    This is likely due to a missing strong-name key pair file. The easiest way to solve this problem is to create a new one. Navigate to: Microsoft Visual Studio 2010 > Visual Studio Tools > Visual Studio x64 Win64 Command Prompt (2010) [if you aren't on an x64 box, pick another command prompt option that fits]. Once the command prompt window displays, type in this statement:
      sn -k c:\SampleKey.snk
    Then copy the output *.snk file to the project directory, or the *referenced directory. Remove the old reference to the *.snk file from the project. Add the paired key back to the project as an existing item. When you add the *.snk file back to the project, you will see that the *.snk file is no longer missing. Our work is done!
    *referenced directory: Pay attention to the original error message on compile. The *.snk file that is referenced may be in a directory path you aren't expecting, so you will still get the error unless you change the directory path or write the file to the directory where the build expects to find the *.snk file.
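    For reference, assembly signing is driven by two MSBuild properties in the project file; a hedged sketch of what the relevant .csproj fragment might look like once the regenerated key has been added (the file name simply follows the example above):
      <PropertyGroup>
        <SignAssembly>true</SignAssembly>
        <AssemblyOriginatorKeyFile>SampleKey.snk</AssemblyOriginatorKeyFile>
      </PropertyGroup>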

    Read the article

  • A Gentle .NET touch to Unix Touch

    - by lavanyadeepak
    A Gentle .NET touch to Unix Touch. The Unix world has an elegant utility called 'touch' which modifies the timestamp of the file whose path is passed as an argument to it. Unfortunately, we don't have a quick and direct tool like this in the Windows domain. However, just a few lines of C# can fill this gap and rejuvenate any file in the file system, subject to ACL restrictions, with the current timestamp.
      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Text;
      using System.IO;

      namespace LavanyaDeepak.Utilities
      {
          class Touch
          {
              static void Main(string[] args)
              {
                  if (args.Length < 1)
                  {
                      Console.WriteLine("Please specify the path of the file to operate upon.");
                      return;
                  }

                  if (!File.Exists(args[0]))
                  {
                      // File.Exists returns false for directories, so check whether the
                      // argument is actually a directory before reporting "not found".
                      try
                      {
                          FileAttributes objFileAttributes = File.GetAttributes(args[0]);
                          if ((objFileAttributes & FileAttributes.Directory) == FileAttributes.Directory)
                          {
                              Console.WriteLine("The input was not a regular file.");
                              return;
                          }
                      }
                      catch { }
                      Console.WriteLine("The file does not seem to exist.");
                      return;
                  }

                  try
                  {
                      // The actual "touch": stamp the file with the current time.
                      File.SetLastWriteTime(args[0], DateTime.Now);
                      Console.WriteLine("The touch completed successfully");
                  }
                  catch (System.UnauthorizedAccessException exUnauthException)
                  {
                      Console.WriteLine("Unable to touch file. Access is denied. The security manager responded: " + exUnauthException.Message);
                  }
                  catch (IOException exFileAccessException)
                  {
                      Console.WriteLine("Unable to touch file. The IO interface failed to complete the request and responded: " + exFileAccessException.Message);
                  }
                  catch (Exception exGenericException)
                  {
                      Console.WriteLine("Unable to touch file. An internal error occurred. The details are: " + exGenericException.Message);
                  }
              }
          }
      }

    Read the article

  • Dreamweaver not loading due to workspace file problem

    - by Lynda
    I went to launch Dreamweaver CS 5.5 and this message popped up: "XML parsing fatal error: Invalid document structure, line 1, file C:\Documents...(file path)...Workspace\My Workspace.xml". It was followed by: "The following panel layout is missing or could not be read: C:...My Workspace.xml. The application will not have a correct layout. Please load one from Window > Workspace". After that, Dreamweaver acted as if it was going to load, but never did. When I tried to close the program, it crashed. I followed the file path and saw two files: My Workspace.xml (0 KB) and My Workspace (5 KB); the second one has an unknown file type. I deleted the first file and renamed the unknown file to My Workspace.xml; everything worked fine after that point. Why did Dreamweaver do this? It has happened several times, but I have not changed anything that should affect that file.

    Read the article

  • Mac OS X behind OpenLDAP and Samba

    - by Sam Hammamy
    I have been battling for a week now to get my Mac (Mountain Lion) to authenticate on my home network's OpenLDAP and Samba. From several sources, like the Ubuntu community docs and other blogs, and after a hell of a lot of trial and error and piecing things together, I have created a samba.ldif that will pass the smbldap-populate when combined with apple.ldif, and I have a fully functional OpenLDAP server and a Samba PDC that uses LDAP to authenticate the OS X machine. The problem is that when I log in, the home directory is not created or pulled from the server. I get the following in system.log:
      Sep 21 06:09:15 Sams-MacBook-Pro.local SecurityAgent[265]: User info context values set for sam
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got user: sam
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got ruser: (null)
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got service: authorization
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_principal_for_user(): no authauth availale for user.
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_principal_for_user(): failed: 7
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Failed to determine Kerberos principal name.
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Done cleanup3
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Kerberos 5 refuses you
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): pam_sm_authenticate: ntlm
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_acct_mgmt(): OpenDirectory - Membership cache TTL set to 1800.
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_record_check_pwpolicy(): retval: 0
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Establishing credentials
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Got user: sam
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Context initialised
      Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): pam_sm_setcred: ntlm user sam doesn't have auth authority
    All that's great and good and I authenticate. Then I get:
      CFPreferences: user home directory for user kCFPreferencesCurrentUser at /Network/Servers/172.17.148.186/home/sam is unavailable. User domains will be volatile.
      Failed looking up user domain root; url='file://localhost/Network/Servers/172.17.148.186/home/sam/' path=/Network/Servers/172.17.148.186/home/sam/ err=-43 uid=9000 euid=9000
    If you're wondering where /Network/Servers/IP/home/sam comes from, it's from a couple of blogs that said the OpenLDAP attribute apple-user-homeDirectory should have that value and the NFSHomeDirectory on the Mac should point to apple-user-homeDirectory. I also set the attribute apple-user-homeurl to <home_dir><url>smb://172.17.148.186/sam/</url><path></path></home_dir> which I found on this forum. Any help is appreciated, because I'm banging my head against the wall at this point. 
By the way, I intend to create a blog on my vps just for this, and create an install script in python that people can download so no one has to go through what I've had to go through this week :) After some sleep I am going to try to login from a windows machine and report back here. Thanks Sam

    Read the article

  • Can't connect to DeploymentShare$ from PC attempting to MDT, but can other PCs on the network

    - by Moman10
    I am in the process of setting up MDT and have run across a problem. MDT is installed on a Windows 2012 server, MDT version 6.2.5019.0, using WDS as well. It's an Active Directory domain, and the server is up to date and on the network. I boot up the PC, it gets an address from DHCP, pulls down the LiteTouchPE_x64.wim image and goes into the MS Solution Accelerators screen. The Processing Bootstrap Settings box comes up and processes for a couple of seconds, then goes away; it sits there for another minute or so and then gives the error: "A connection to the deployment share (\\Acme-MDT\DeploymentShare$) could not be made. Can not reach the DeployRoot. Possible Cause: Network Routing error or Network Configuration Error." I can then retry or cancel. I have seen this error online, but so far nothing that helps fix it; it seems to be an issue with the FQDN. I verified that I am getting an IP address and that I can successfully ping the MDT server if I use the FQDN, but not by its A record of Acme-MDT. I tried manually mapping the network share using net use and it works if I use the FQDN, but it fails with error code 53, "Network path not found", if I just use the A record of Acme-MDT. Here is the net use command I'm using:
      net use * \\Acme-MDT\DeploymentShare$ /u:Domain\Administrator
    It gives the error System Error 53, Network path not found (and doesn't prompt for a password), but if I use the FQDN of \\Acme-MDT.domain.com\DeploymentShare$ it works fine to map the drive. I guess the problem is that when it tries to load the image, it is trying to start from \\Acme-MDT\DeploymentShare$ and I need it to start from \\Acme-MDT.domain.com\DeploymentShare$, but I'm not sure how to get it to do that. I've put the fully qualified path in CustomSettings.ini and Bootstrap.ini, updated the deployment share, regenerated the boot image and replaced the boot WIM in WDS. Or, if someone has an idea as to why it's acting this way and knows a way around it, the end result is what matters! :) I did verify in DNS that Acme-MDT is there, with the proper IP, and I can successfully use the net use command to map this drive from a couple of other computers that are already on the network. I am assuming it has something to do with that computer not already being part of the domain, but I'm honestly at a loss as to how to fix it. Any ideas are appreciated, thanks in advance for your help!
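    For context, a hedged sketch of what a Bootstrap.ini using the fully qualified DeployRoot described above might look like (domain and credentials here are placeholders, not values confirmed by the poster):
      [Settings]
      Priority=Default

      [Default]
      DeployRoot=\\Acme-MDT.domain.com\DeploymentShare$
      UserID=Administrator
      UserDomain=Domain
      UserPassword=placeholder
      SkipBDDWelcome=YES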

    Read the article

  • Data loss through permissions change?

    - by charliehorse55
    I seem to have deleted some files on my media drive, simply by changing the permissions.
    The Story: I have many operating systems installed on my computer, and constantly switch between them. I bought a 1TB HD and formatted it as HFS+ (not journaled). It worked well between OS X and all of my Linux installations while having much better metadata support than NTFS. I never synced the UIDs for my operating systems, so the permissions were always doing funny things. Yesterday I tried to fix the permissions by first changing the UIDs of the other operating systems to match OS X, and then changing the file ownership of all files on the drive to match OS X. About 50% of the files on the drive were originally owned by OS X; the other half were owned by the various Linux installations. I started to try and change the file permissions for the folders, and that's when it went south.
    The Commands: These commands were run recursively on the one section of the drive.
      sudo chflags nouchg
      sudo chflags -N
      sudo chown myusername
      sudo chmod 666
      sudo chgrp staff
    The Bad: Sometime during the execution of these commands, all of the files belonging to OS X were deleted. If a folder had Linux-based files it would remain intact, but any folder containing exclusively OS X files was erased. If a folder containing Linux files also contained a subfolder with only OS X files, the subfolder would remain but is inaccessible and displays a file size of 0 bytes. Luckily these commands were only run on the videos folder; I also have a music folder with the same issue, but I did not execute any of these commands on it. Effectively I have examples of the file permissions for all 3 states - the Linux files before and after, and the OS X files before.
    OS X file before:
      -rw-r--r--@ 1 charliehorse 1000 3634241 15 Nov 2008 /path/to/file
      com.apple.FinderInfo 32
    Linux file before:
      -rw-r--r--@ 1 charliehorse 1000 5321776 20 Sep 2002 /path/to/file/
      com.apple.FinderInfo 32
    Linux file after (read only; different file, but I believe the same permissions originally):
      -rw-rw-rw-@ 1 charliehorse staff 366982610 17 Jun 2008 /path/to/file
      com.apple.FinderInfo 32
    These files still exist, so if there are any other commands to run on them to determine what has happened here, I can do that.
    EDIT: Running ls on one of the "empty" deleted OS X folders yields this:
      ls: .: Permission denied
      ls: ..: Permission denied
      ls: subdirA: Permission denied
      ls: subdirB: Permission denied
      ls: subdirC: Permission denied
      ls: subdirD: Permission denied
    I believe my files might still be there, but the permissions are screwed.
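    One reading of these symptoms (a guess, not a confirmed diagnosis): a recursive chmod 666 strips the execute bit from directories, and a directory without execute permission cannot be traversed, so its contents appear empty or inaccessible rather than actually deleted. If that is what happened, a hedged sketch of how the execute bit could be restored on directories only (the path is a placeholder):
      # capital X adds execute only to directories (and files that already had
      # an execute bit), leaving regular files untouched
      sudo chmod -R a+X /Volumes/Media/videos

      # then re-check one of the "empty" folders
      ls -le /Volumes/Media/videos/some_folder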

    Read the article

  • How to solve CUDA crash when run CUDA example fluidsGL?

    - by sam
    I use Ubuntu 12.04 64-bit with a GTX 560 Ti. I installed CUDA by following these instructions:
      wget http://developer.download.nvidia.com/compute/cuda/4_2/rel/toolkit/cudatoolkit_4.2.9_linux_64_ubuntu11.04.run
      wget http://developer.download.nvidia.com/compute/cuda/4_2/rel/drivers/devdriver_4.2_linux_64_295.41.run
      wget http://developer.download.nvidia.com/compute/cuda/4_2/rel/sdk/gpucomputingsdk_4.2.9_linux.run
      chmod +x cudatoolkit_4.2.9_linux_64_ubuntu11.04.run
      sudo ./cudatoolkit_4.2.9_linux_64_ubuntu11.04.run
      echo "/usr/local/cuda/lib64" > ~/cuda.conf
      echo "/usr/local/cuda/lib" >> ~/cuda.conf
      sudo mv ~/cuda.conf /etc/ld.so.conf.d/cuda.conf
      sudo ldconfig
      echo 'export PATH=$PATH:/usr/local/cuda/bin' >> ~/.bashrc
      chmod +x gpucomputingsdk_4.2.9_linux.run
      ./gpucomputingsdk_4.2.9_linux.run
      sudo apt-get install build-essential libx11-dev libglu1-mesa-dev freeglut3-dev libxi-dev libxmu-dev gcc-4.4 g++-4.4
      sed 's/g++ -fPIC/g++-4.4 -fPIC/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk
      sed 's/gcc -fPIC/gcc-4.4 -fPIC/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk
      sed 's/-L$(SHAREDDIR)\/lib/-L$(SHAREDDIR)\/lib -L\/usr\/lib\/nvidia-current/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk
      sed 's/-L$(SHAREDDIR)\/lib -L\/usr\/lib\/nvidia-current $(NVCUVIDLIB)/-L$(SHAREDDIR)\/lib $(NVCUVIDLIB)/g' ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk > ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak; mv ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk.bak ~/NVIDIA_GPU_Computing_SDK/C/common/common.mk
    After I run ~/NVIDIA_GPU_Computing_SDK/C/bin/linux/release/./fluidsGL it gets stuck; even the mouse and keyboard stop responding. How can I solve this? Thank you~

    Read the article

  • how do I get vim home directory?

    - by nsharish
    I wanted to set a VIMHOME variable this way (common to Windows and Linux): let $VIMHOME=expand("%:p")."/..", so that VIMHOME is "~/.vim" on Linux or "path/to/vimfiles" on Windows. I put this in a var.vim file and placed it in the plugin directory. It loads properly, but VIMHOME is set only to "./..". How do I get the full path of a file using expand()? Is there an easy way to set VIMHOME?
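    A possible explanation, sketched below (not from the original thread): at plugin-load time there is no current file, so "%:p" expands to an empty path, which is why the result collapses to "./..". Inside a sourced script, "<sfile>" expands to the script's own path instead, which may be closer to what is wanted here:
      " plugin/var.vim -- a hedged sketch
      " <sfile>:p:h = directory containing this script (the plugin dir);
      " one more :h = its parent, i.e. ~/.vim on Linux or .../vimfiles on Windows
      let $VIMHOME = expand('<sfile>:p:h:h')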

    Read the article

  • PHP+Apache as forward/reverse proxy: ¿how to process client requests and server responses in PHP?

    - by Lightworker
    Hi! I'm having a lot of trouble with the proper configuration of Apache's mod_proxy.so to work as desired... The main idea is to create a proxy on a local machine in a network which will be able to process a client request (the client connects through this Apache-based proxy) in PHP. It should also be able to process the server responses in PHP. Those are the two functionalities, and they are independent of each other. Let me present a little schema of what I need to achieve: As you can see here, there are two ways: the blue one and the red one. For the blue one, I basically connected a client (Machine B - a cell phone) on my local network (home) and configured it to go through a proxy, which is Machine A (a personal computer) on exactly the same network. So let's say (no DHCP):
      Machine A: 192.168.1.40 -- Apache is running on this machine, configured to listen on port 80.
      Machine B (cell phone): 192.168.1.75 -- configured to go through a proxy, which is IP 192.168.1.40 and port 80 (basically, Machine A).
    After configuring Apache properly, which basically means removing the "#" in httpd.conf on the lines for mod_proxy.so (main worker), mod_proxy_connect.so (SSL, AllowCONNECT, ...) and mod_proxy_http.so (needed to handle HTTP requests/responses), I have, in my case, lines like this:
      # Implements a proxy/gateway for Apache.
      Include "conf/extra/httpd-proxy.conf"
      # Various default settings
      Include "conf/extra/httpd-default.conf"
      # Secure (SSL/TLS) connections
      Include "conf/extra/httpd-ssl.conf"
    which gives me the ability to configure the file httpd-proxy.conf to prepare a forward proxy or a reverse proxy. So I'm not sure whether what I need is a forward proxy or a reverse one. For a forward proxy I've done this:
      <IfModule proxy_module>
      <IfModule proxy_http_module>
      #
      # FORWARD Proxy
      #
      #ProxyRequests Off
      ProxyRequests On
      ProxyVia On
      <Proxy *>
      Order deny,allow
      # Allow from all
      Deny from all
      Allow from 192.168.1
      </Proxy>
      </IfModule>
      </IfModule>
    which basically passes all the packets normally to the server and back to the client. I can trace it perfectly (and confirm that it works) by looking at Apache's access.log: any request I make with the cell phone then appears in the Apache log. So it works. But here comes the problem: I need to process those client requests, and I need to do it in PHP. I have read a lot about this. I've read the official Apache documentation on mod_proxy in detail, and I've searched a lot on forums, but without luck. So I thought about a first approximation:
    1) A forward proxy in Apache passes all the packets along, and it's not possible to process them. This seems to be true, so, what about a reverse proxy? I envisioned something like:
      ProxyRequests Off
      <Proxy *>
      Order deny,allow
      Allow from all
      </Proxy>
      ProxyPass http://www.google.com http://www.yahoo.com
      ProxyPassReverse http://www.google.com http://www.yahoo.com
    which is just a test, but it should mean that when my cell phone tries to navigate to Google, it ends up at Yahoo, shouldn't it? But no, it doesn't work. And as you can see, ALL the examples of an Apache reverse proxy look like:
      ProxyPass /foo http://foo.example.com/bar
      ProxyPassReverse /foo http://foo.example.com/bar
    which means that a request for a local path is resolved at a remote location. But what I need is the inverse! When the phone asks for a remote site, I want to resolve that request on my local server (the Apache one) so I can process it with a PHP module.
    So, if it's a forward proxy, I need to pass through PHP first. If it's a reverse proxy, I need to redirect the outgoing request to my local server so it is processed in PHP first. Then a second option comes to mind:
    2) I've seen something like:
      <Proxy http://example.com/foo/*>
      SetOutputFilter INCLUDES
      </Proxy>
    and I started to read about SetOutputFilter, SetInputFilter, AddOutputFilter and AddInputFilter, but I don't really know how I can use them. They seem promising: with something like this I should be able to add an input filter to process the client requests in PHP and send back to the client whatever I program/want (not the remote server response), which is the BLUE path on the schema, and I should be able to add an output filter, which seems to give me the ability to process the remote server response before sending it to the client, which would be the RED path on the schema. The red path is just for reading server responses and playing with them, nothing more. The blue path is the important one, because I will send to the client whatever I want after processing the requests. I'm so sorry for this amazingly big post, but I needed to explain it as well as I can. I hope someone will understand my problem and help me solve it! Lots of thanks in advance!! :)
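    Not part of the original question, but one commonly suggested pattern for the "blue path" described above is to stop relying on mod_proxy for the processing step and instead route requests through a small PHP pass-through script, which can then inspect or rewrite both the request and the response itself. A minimal hedged sketch (the script name and the url parameter are made up for illustration):
      <?php
      // proxy.php (hypothetical): fetch the requested URL and return it,
      // giving PHP a hook over the request (before curl_exec) and the
      // response (after curl_exec).
      $url = isset($_GET['url']) ? $_GET['url'] : '';
      if ($url === '') {
          header('HTTP/1.1 400 Bad Request');
          exit('missing url parameter');
      }

      // "blue path": the client request ($_SERVER, $_GET, headers) can be
      // examined or altered here before anything is fetched.

      $ch = curl_init($url);
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
      curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
      $body = curl_exec($ch);
      curl_close($ch);

      // "red path": the remote server's response is now in $body and can be
      // processed before it goes back to the client.
      echo $body;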

    Read the article

  • Tomcat 6 IP restrictions

    - by KB22
    I need to protect a certain folder within a web application of mine from access from outside a defined IP range. From O'Reilly's Tomcat Tips I figured that:
      <Context path="/path/to/secret_files" ...>
        <Valve className="org.apache.catalina.valves.RemoteAddrValve"
               allow="127.0.0.1" deny=""/>
      </Context>
    is the way to go? I'm not that much into Tomcat configuration, so I'm a little dazzled as to where to put these restrictions. Do I put this within my web.xml, or is this something I need to add to some general Tomcat conf file?
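    For orientation (a hedged sketch, not from the original question): a Context element does not go in web.xml; it normally lives in the application's META-INF/context.xml or in conf/Catalina/localhost/appname.xml. In Tomcat 6 the allow/deny attributes of RemoteAddrValve are comma-separated regular expressions, so an IP range looks roughly like this (addresses are illustrative):
      <!-- META-INF/context.xml -->
      <Context>
        <Valve className="org.apache.catalina.valves.RemoteAddrValve"
               allow="127\.0\.0\.1,192\.168\.1\.\d+" />
      </Context>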

    Read the article

  • Partnering with your Applications – The Oracle AppAdvantage Story

    - by JuergenKress
    So, what is Oracle AppAdvantage?
    - A practical approach to adopting cloud, mobile, social and other trends
    - A guided path to aligning IT more closely with business objectives
    - Maximizing the value of existing investments in applications
    - A layered approach to simplifying IT, building differentiation and bringing innovation
    - All of the above?
    Enhance the value of your existing applications investment with #Oracle #AppAdvantage. Aligning biz and IT expectations on simplifying IT, building differentiation and innovation #AppAdvantage. Adopt a pace-layered approach to extracting biz value from your apps with #AppAdvantage. Bringing #cloud, #social, #mobile to your apps with #Oracle #AppAdvantage.
    Embracing Situational IT: In the next IT Leaders Editorial, Rick Beers discusses the necessity of IT disruption and #AppAdvantage. Rick Beers sheds light on Situational Leadership and the path to success #AppAdvantage. Rick Beers draws parallels between CIOs' strategic thinking and the #Oracle #AppAdvantage approach. Do you have this paper in your summer reading list? Aligning biz and IT #AppAdvantage. What does Situational Leadership have to do with Oracle AppAdvantage? Catch the next piece in Rick Beers' monthly series of IT Leaders Editorials and find out. #AppAdvantage
    Middleware Minutes with Howard Beader - August edition: In the quarterly column, @hbeader discusses the impact of #cloud, #mobile, #fastdata on #middleware. Making #cloud, #mobile, #fastdata a part of your IT strategy with #middleware. What keeps the #oracle #middleware team busy? Find out in the inaugural post of the quarterly update on #middleware. A recent #middleware news update, along with a preview of things to come from #Oracle, in @hbeader's quarterly column. In his inaugural post, Howard Beader, senior director for Oracle Fusion Middleware, discusses recent industry trends including mobile, cloud, fast data and integration, and how these are shaping IT and business requirements.
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum
    Technorati Tags: AppAdvantage, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • How to properly add .NET assemblies to Powershell session?

    - by amandion
    I have a .NET assembly (a DLL) which is an API to backup software we use here. It contains some properties and methods I would like to take advantage of in my PowerShell script(s). However, I am running into a lot of issues with first loading the assembly, then using any of the types once the assembly is loaded. The complete file path is: C:\rnd\CloudBerry.Backup.API.dll. In PowerShell I use:
      $dllpath = "C:\rnd\CloudBerry.Backup.API.dll"
      Add-Type -Path $dllpath
    I get the error below:
      Add-Type : Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.
      At line:1 char:9
      + Add-Type <<<< -Path $dllpath
          + CategoryInfo : NotSpecified: (:) [Add-Type], ReflectionTypeLoadException
          + FullyQualifiedErrorId : System.Reflection.ReflectionTypeLoadException,Microsoft.PowerShell.Commands.AddTypeCommand
    Using the same cmdlet on another .NET assembly, DotNetZip, which has examples of using the same functionality on its site, also does not work for me. I eventually found that I am seemingly able to load the assembly using reflection:
      [System.Reflection.Assembly]::LoadFrom($dllpath)
    Although I don't understand the difference between the methods Load, LoadFrom and LoadFile, that last method seems to work. However, I still seem to be unable to create instances or use objects. Each time I try, I get errors saying that PowerShell is unable to find any of the public types. I know the classes are there:
      $asm = [System.Reflection.Assembly]::LoadFrom($dllpath)
      $cbbtypes = $asm.GetExportedTypes()
      $cbbtypes | Get-Member -Static
      ---- start of excerpt ----
      TypeName: CloudBerryLab.Backup.API.BackupProvider
      Name                MemberType Definition
      ----                ---------- ----------
      PlanChanged         Event      System.EventHandler`1[CloudBerryLab.Backup.API.Utils.ChangedEventArgs] PlanChanged(Sy...
      PlanRemoved         Event      System.EventHandler`1[CloudBerryLab.Backup.API.Utils.PlanRemoveEventArgs] PlanRemoved...
      CalculateFolderSize Method     static long CalculateFolderSize()
      Equals              Method     static bool Equals(System.Object objA, System.Object objB)
      GetAccounts         Method     static CloudBerryLab.Backup.API.Account[], CloudBerry.Backup.API, Version=1.0.0.1, Cu...
      GetBackupPlans      Method     static CloudBerryLab.Backup.API.BackupPlan[], CloudBerry.Backup.API, Version=1.0.0.1,...
      ReferenceEquals     Method     static bool ReferenceEquals(System.Object objA, System.Object objB)
      SetProfilePath      Method     static System.Void SetProfilePath(string profilePath)
      ---- end of excerpt ----
    Trying to use static methods fails, and I don't know why!!!
      [CloudBerryLab.Backup.API.BackupProvider]::GetAccounts()
      Unable to find type [CloudBerryLab.Backup.API.BackupProvider]: make sure that the assembly containing this type is loaded.
      At line:1 char:42
      + [CloudBerryLab.Backup.API.BackupProvider] <<<< ::GetAccounts()
          + CategoryInfo : InvalidOperation: (CloudBerryLab.Backup.API.BackupProvider:String) [], RuntimeException
          + FullyQualifiedErrorId : TypeNotFound
    Any guidance appreciated!!
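    Not part of the original post, but since the error itself suggests retrieving LoaderExceptions, a hedged sketch of how that property can be surfaced to see which dependent assembly is actually failing to load:
      try {
          Add-Type -Path $dllpath
      }
      catch [System.Reflection.ReflectionTypeLoadException] {
          # each entry usually names the missing or mismatched dependency
          $_.Exception.LoaderExceptions | ForEach-Object { $_.Message }
      }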

    Read the article

  • SQL Server 2005 to 2008 upgrade - are MDF files binary compatible?

    - by james
    I have 50 databases on an MS SQL Server 2005 system and want to upgrade to MS SQL Server 2008. This is what I tried on some test machines:
    1. Copied the \DATA directory from the source (MSSQL 2005) to exactly the same path on the target (MSSQL 2008) server.
    2. Edited the startup parameters on the MSSQL 2008 service to point to the path of the MSSQL 2005 master database.
    3. Restarted the MSSQL service.
    It worked and I can access all databases, tables and data. My questions are: I go back to SQL Server 4.2 and it has never been this easy. I know it worked, but should it have worked? Am I missing something, or is there going to be a gotcha next week? These are simple databases, with just tables, views and indexes. No cross-database links, no triggers, etc.
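    For comparison (not from the original question), the more commonly documented per-database route is backup/restore or detach/attach rather than reusing the whole \DATA directory and the old master database; a hedged T-SQL sketch with placeholder database and file names:
      -- on the SQL Server 2005 instance
      EXEC sp_detach_db @dbname = N'MyDatabase';

      -- copy MyDatabase.mdf / MyDatabase_log.ldf to the 2008 server, then:
      CREATE DATABASE MyDatabase
          ON (FILENAME = N'D:\Data\MyDatabase.mdf'),
             (FILENAME = N'D:\Data\MyDatabase_log.ldf')
          FOR ATTACH;  -- the database is upgraded to the 2008 format on attach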

    Read the article

  • Ubuntu 10.04 - unable to install Arduino

    - by Newbie
    Hello! At the moment, I am trying to install Arduino on my Ubuntu 10.04 (32-bit) computer. I downloaded the latest release at http://arduino.cc/en/Main/Software, cd'ed to the directory and unzipped the package. When I try to run ./arduino, I get the following error:
      Exception in thread "main" java.lang.ExceptionInInitializerError
        at processing.app.Base.main(Base.java:112)
      Caused by: java.awt.HeadlessException
        at sun.awt.HeadlessToolkit.getMenuShortcutKeyMask(HeadlessToolkit.java:231)
        at processing.core.PApplet.<clinit>(Unknown Source)
        ... 1 more
    Here is my java -version output:
      java version "1.6.0_20"
      OpenJDK Runtime Environment (IcedTea6 1.9.5) (6b20-1.9.5-0ubuntu1~10.04.1)
      OpenJDK Server VM (build 19.0-b09, mixed mode)
    Any suggestions on this? I tried to install Arduino without the 'arduino' package, and I also tried to install it with apt-get (sudo apt-get install arduino). When I try to start Arduino (using the arduino command), I get the following error:
      Exception in thread "main" java.lang.ExceptionInInitializerError
        at processing.app.Preferences.load(Preferences.java:553)
        at processing.app.Preferences.load(Preferences.java:549)
        at processing.app.Preferences.init(Preferences.java:142)
        at processing.app.Base.main(Base.java:188)
      Caused by: java.awt.HeadlessException
        at sun.awt.HeadlessToolkit.getMenuShortcutKeyMask(HeadlessToolkit.java:231)
        at processing.core.PApplet.<clinit>(PApplet.java:224)
        ... 4 more
    Update: I saw that I had installed several versions of the JRE (Sun and OpenJDK), so I uninstalled the OpenJDK JRE. Now, when calling arduino, I get a new error:
      java.lang.UnsatisfiedLinkError: no rxtxSerial in java.library.path thrown while loading gnu.io.RXTXCommDriver
      Exception in thread "main" java.lang.UnsatisfiedLinkError: no rxtxSerial in java.library.path
        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1734)
        at java.lang.Runtime.loadLibrary0(Runtime.java:823)
        at java.lang.System.loadLibrary(System.java:1028)
        at gnu.io.CommPortIdentifier.<clinit>(CommPortIdentifier.java:123)
        at processing.app.Editor.populateSerialMenu(Editor.java:965)
        at processing.app.Editor.buildToolsMenu(Editor.java:717)
        at processing.app.Editor.buildMenuBar(Editor.java:502)
        at processing.app.Editor.<init>(Editor.java:194)
        at processing.app.Base.handleOpen(Base.java:698)
        at processing.app.Base.handleOpen(Base.java:663)
        at processing.app.Base.handleNew(Base.java:578)
        at processing.app.Base.<init>(Base.java:318)
        at processing.app.Base.main(Base.java:207)

    Read the article

  • How do I enable JPEG Support for PHP?

    - by ngache
    My configure command doesn't say anything about JPEG, nor GIF/PNG, but I can see GIF/PNG support in the output of phpinfo(). I built PHP with --with-gd, but only GIF Support and PNG Support appear in the output of phpinfo(). How do I enable JPEG Support?
    UPDATE: I got this problem when compiling:
      Sorry, I cannot run apxs. Possible reasons follow:
      1. Perl is not installed
      2. apxs was not found. Try to pass the path using --with-apxs2=/path/to/apxs
      3. Apache was not built using --enable-so (the apxs usage page is displayed)
      The output of /usr/local/apache2/bin/apxs follows:
      cannot open /usr/local/apache2/build/config_vars.mk: No such file or directory at /usr/local/apache2/bin/apxs line 218.
    What should I do now?
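    For reference, a hedged sketch of the kind of configure line that enables JPEG in the bundled GD for PHP 5 (the paths are assumptions and depend on where the libjpeg/libpng development headers are installed):
      ./configure \
          --with-apxs2=/usr/local/apache2/bin/apxs \
          --with-gd \
          --with-jpeg-dir=/usr \
          --with-png-dir=/usr \
          --with-zlib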

    Read the article

  • configuration transfer over scp on commit not working on Juniper EX-2200 switch

    - by liv2hak
    I am making a series of configuration changes on a Junos EX-2200 switch. I have this switch connected to another PC via an Ethernet cable. The IP address of the switch is 192.168.1.1, and I am able to ping between the switch and the PC in both directions. After the changes I make, I run the following commands:
      set system archival configuration transfer-on-commit
      set system archival configuration archive-sites "scp://karthik@192.168.1.10:/home/karthik/ws_karthik/sw1_config_1.txt" password godfather
      commit
    where there is a user with user name "karthik" and password "godfather". The path shown above also exists on the system. However, I don't see the configuration file sw1_config_1.txt created at the path specified. I have also verified that sshd is running on the PC (192.168.1.10). Am I doing something wrong here? It would be great if anyone could help me out.

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction
    More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site.
    Hive Actions: Prepping for Pig
    In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie. I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code.
      CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column 1, column2)
      PARTITIONED BY (yr string)
      STORED AS ...
      LOCATION '/user/oracle/weather/historic';
    As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of their long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access.
      ALTER TABLE historic_weather ADD IF NOT EXISTS
      PARTITION (yr='2010') LOCATION '/user/oracle/weather/historic/yr=2011';
      INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history'
      SELECT w.stn, w.wban, w.weather_year, w.weather_month,
             w.weather_day, w.temp, w.dewp, w.weather
      FROM (
        FROM historic_weather
        SELECT TRANSFORM(...)
        USING '/path/to/hive/filters/ncdc_parser.py'
        as stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather
      ) w;
    Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called weather_train.hql.
    Starting Our Workflow
    Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph. 
Coordinator jobs can take all the same actions of Workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point: <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1"> <start to="ParseNCDCData"/> <end name="end"/> </workflow-app> To this we need to add an action, and within that we'll specify the hive parameters Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure. <action name="ParseNCDCData"> <hive xmlns="uri:oozie:hive-action:0.2"> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <configuration> <property> <name>oozie.hive.defaults</name> <value>/user/oracle/weather_ooze/hive-default.xml</value> </property> </configuration> <script>ncdc_parse.hql</script> </hive> <ok to="WeatherMan"/> <error to="end"/> </action> There are a couple of things to note here: I have to give the FQDN (or IP) and port of my JobTracker and NameNode. I have to include a hive-default.xml file. I have to include a script file. The hive-default.xml and script file must be stored in HDFS That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-defaults.xml on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain: workflow.xml hive-defaults.xml (make sure this file contains your metastore connection data) ncdc_parse.hql Adding Pig to the Ooze Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows: <action name="WeatherMan"> <pig> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <script>weather_train.pig</script> </pig> <ok to="end"/> <error to="end"/> </action> Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My pig script registers the Weka Jar and a chunk of jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the pig script, because pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes. While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding Jars to the distributed cache from Oozie's Pig Cookbook. Making the Workflow Work We've got a workflow defined and have collected all the components we'll need to run. 
But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows: nameNode=hdfs://localhost:8020 jobTracker=localhost:8021 queueName=default weatherRoot=weather_ooze mapreduce.jobtracker.kerberos.principal=foo dfs.namenode.kerberos.principal=foo oozie.libpath=${nameNode}/user/oozie/share/lib oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot} outputDir=weather-ooze While some of the pieces of the properties file are familiar (e.g., JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath pieces is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of Jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed and write the output to the output directory. We're finally ready to submit our job! After all that work we only need to do a few more things: Validate our workflow.xml Copy our working directory to HDFS Submit our job to the Oozie server Run our workflow Let's do them in order. First validate the workflow: oozie validate workflow.xml Next, copy the working directory up to HDFS: hadoop fs -put working_dir /user/oracle/working_dir Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument. oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit We've submitted the job, but we don't see any activity on the JobTracker? All I got was this funny bit of output: 14-20120525161321-oozie-oracle This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job. oozie -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run." This will prep and run the workflow immediately. Takeaway So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.

    Read the article

  • How to customize email FROM header with an email from a different domain ?

    - by user40763
    How can I customize the mail FROM header in our email marketing application, to enable our customers to specify their OWN email address (from their domain)? Currently the customer specifies his own domain and we use it in the Reply-To header.
    CURRENTLY
      From: [email protected]
      Reply-To: customer_email@customer_domain.com
      Return-Path: [email protected]
    WHAT WE NEED
      From: customer_email@customer_domain.com
      Reply-To: customer_email@customer_domain.com
      Return-Path: [email protected]
    We do it this way to avoid getting blacklisted, because mail servers like Gmail or Hotmail would consider it a mail header forgery attempt. But our customers keep asking us to make the FROM header customizable. Can someone help us?

    Read the article

  • Problem closing MDI child window in Terminal Services/Remote Desktop Connection 7.0

    - by Justin Love
    I have one user whose computer just got updated to the 7.0 Remote Desktop Connection. Concurrently, she has started having a problem closing the MDI child windows in an old FoxPro application running on the remote server. We have two different servers, both 2003, running the same application, one locally and one at a remote office. Only the remote office server is giving trouble. It works fine for me, even when logging into her TS account. No other users have complained. The other day the same user experienced an error message (path not found for a path showing a localization placeholder) starting the RDC, fixed by reboot. I suspect she may have had RDC running during the 7.0 upgrade.

    Read the article

  • Aptana Under linux

    - by fatnjazzy
    Hey, I downloaded Aptana Studio 2.0 and unzipped it on the desktop. I'm trying to run Aptana Studio 2.0 under OpenSUSE 11 and I get the following error... Any idea why? Thanks.
      JVM terminated. Exit code=-1
      -Xms40m
      -Xmx384m
      -Djava.awt.headless=true
      -XX:MaxPermSize=256m
      -Djava.class.path=/home/avi/Desktop/Aptana Studio 2.0/plugins/org.eclipse.equinox.launcher_1.0.200.v20090520.jar
      -os linux
      -ws gtk
      -arch x86
      -showsplash
      -launcher /home/avi/Desktop/Aptana Studio 2.0/AptanaStudio
      -name AptanaStudio
      --launcher.library /home/avi/Desktop/Aptana Studio 2.0/plugins/org.eclipse.equinox.launcher.gtk.linux.x86_1.0.200.v20090520/eclipse_1206.so
      -startup /home/avi/Desktop/Aptana Studio 2.0/plugins/org.eclipse.equinox.launcher_1.0.200.v20090520.jar
      -application com.aptana.ide.desktop.integration.Application
      -vm /usr/lib/jvm/java-1.6.0-openjdk-1.6.0/jre/bin/../lib/i386/client/libjvm.so
      -vmargs
      -Xms40m
      -Xmx384m
      -Djava.awt.headless=true
      -XX:MaxPermSize=256m
      -Djava.class.path=/home/avi/Desktop/Aptana Studio 2.0/plugins/org.eclipse.equinox.launcher_1.0.200.v20090520.jar

    Read the article

  • Mercurial hgwebdir configuration URL

    - by Jonathan Sternberg
    I'm setting up an hgwebdir configuration for the first time with Mercurial on Apache 2. I can see the three repositories I've set up on the front page, and I've figured out how to modify their names so they don't resemble the directory path. But when I click through to one of the repositories, the URL becomes http://localhost/hg/hgweb.cgi/path/to/repos. I would like the URL to be http://localhost/hg/name instead, as that is easier to remember for people who want to clone the repository. Is there any way to do that with hgwebdir?
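    A hedged sketch of the two pieces that usually produce such URLs: a short virtual name in the hgweb config, plus an Apache alias so hgweb.cgi never appears in the URL. All names and paths below are placeholders, not values from the question:
      # hgweb.config -- map a short virtual name to the repository's real path
      [paths]
      name = /srv/hg/path/to/repos

      # Apache httpd.conf -- serve everything under /hg through the CGI script
      ScriptAliasMatch ^/hg(.*) /var/www/hg/hgweb.cgi$1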

    Read the article
