Search Results

Search found 20592 results on 824 pages for 'path variables'.


  • New Certification Exam: "Oracle Database 12c: SQL Fundamentals" Released (1Z0-061)

    - by Brandye Barrington
    Oracle Certification begins testing this week for the new Oracle Database 12c Administrator Certified Associate (OCA) certification. Testing for the Oracle Database 12c: SQL Fundamentals (1Z0-061) exam is now underway. Visit pearsonvue.com/oracle and register for exam 1Z0-061. You can get all preparation details, including exam objectives, number of questions, time allotments, and pricing, on the Oracle Certification Website. Earning the Oracle Database 12c Administrator Certified Associate (OCA) credential demonstrates that you have the foundational knowledge and skills needed to administer the Oracle Database, and sets the stage for your future progression to Oracle Database 12c Administrator Certified Professional (OCP). With Oracle Database 12c, you will experience the benefits of an Oracle Database that is re-engineered for Cloud computing. Multitenant architecture brings enterprises unprecedented hardware and software efficiencies, performance and manageability benefits, and fast and efficient Cloud provisioning. Oracle Database 12c certifications emphasize the full set of skills that DBAs need in today's competitive marketplace. Be among the first to obtain this groundbreaking new Oracle Certified Associate (OCA) certification by registering for this exam today.

    QUICK LINKS
      Certification Path: Oracle Database 12c Administrator Certified Associate (OCA)
      Certification Exam: Oracle Database 12c: SQL Fundamentals (1Z0-061)
      Registration: pearsonvue.com/oracle

    Read the article

  • Updating to Exchange 2013 - any way to do it now?

    - by TomTom
    Exchange 2013 is out, available to some people already. Got it from the VLSC (Volume Licensing Service Center); now trying to get an upgrade path that works for some customers. Problem: there is no in-place upgrade. It is "install on a new server, move mailboxes". This means coexistence with Exchange 2010 for the time it takes to move the mailboxes. Sadly the only compatible Exchange is Exchange 2010 SP3 - which is not going to be out for quite some time. Any way to still do an update? Backup, then restore to a new server? Any beta of the SP that is good enough to ONLY move the mailboxes? I do not care about the rest - this really is "install Exchange 2013, move mailboxes, UNINSTALL 2010". I am quite - ah - unhappy that, at the end, the only ones who will be able to install 2013 right now are new companies.

    Read the article

  • Samba / smbd on Centos 6.5

    - by Satalink
    I've installed Samba4 and have the smb.conf file as follows:

      [global]
          workgroup = WORKGROUP
          server string = Samba Server
          realm = REXIALO.COM
          netbios name = REXIALO.COM
          security = user
          map to guest = Bad Password
          bind interfaces only = no
          interfaces = lo venet0
          log file = /var/log/samba/samba.log
          max log size = 1000

      [webroot]
          path = /usr/local/apache/htdocs
          comment = Example.com webroot directory
          read only = No

    I can connect from the same server with smbclient.

    Localhost:

      # smbclient -L localhost -U root
      Enter root's password:
      Domain=[WORKGROUP] OS=[Unix] Server=[Samba 4.1.11]

          Sharename       Type      Comment
          ---------       ----      -------
          webroot         Disk      RexiAlo webroot directory
          IPC$            IPC       IPC Service (RexiAlo Samba Server)

      Domain=[WORKGROUP] OS=[Unix] Server=[Samba 4.1.11]

          Server               Comment
          ---------            -------

          Workgroup            Master
          ---------            -------

    Network:

      # smbclient -L rexialo.com -U
      Domain=[WORKGROUP] OS=[Unix] Server=[Samba 4.1.11]

          Sharename       Type      Comment
          ---------       ----      -------
          webroot         Disk      RexiAlo webroot directory
          IPC$            IPC       IPC Service (RexiAlo Samba Server)

      Domain=[WORKGROUP] OS=[Unix] Server=[Samba 4.1.11]

          Server               Comment
          ---------            -------

          Workgroup            Master
          ---------            -------

    The problem is that when I try to map to the smb webroot from Windows 7, it asks for user/pass but just times out and then prompts for credentials again. The samba.log file does not show any activity other than the startup of the smbd process. Any help would be appreciated.
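
    One possible first check (a hedged sketch, not from the original post; the user name below is a placeholder): confirm the Windows client can actually reach the SMB ports and that the account it sends exists in Samba's own password database.

      # On the CentOS server: is anything blocking TCP 139/445?
      iptables -L -n | grep -E '139|445'
      # Which accounts does Samba itself know about?
      pdbedit -L
      # If the Windows user is missing, add it (it must also exist as a system user):
      smbpasswd -a someuser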

    Read the article

  • configuring default PYTHONPATH

    - by Shan
    I have a Django application and a few Django commands that I execute through cron jobs on CentOS 5. Recently I updated my python-setuptools package, which in turn updated the python-devel packages. After performing this update, the default PYTHONPATH settings for the Django commands executed through cron are different from those of the Django application that I execute from a shell. Because of this mismatch, my old Django cron jobs fail since the required libraries are not on the path. How do I resolve this issue and ensure that both the cron Django commands and the Django application have the same environment?
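
    One common workaround is to stop depending on whatever search path cron happens to inherit and set the environment explicitly in the crontab. A minimal sketch, with placeholder paths and module names:

      # At the top of the crontab (paths and module names below are placeholders):
      PYTHONPATH=/usr/lib/python2.4/site-packages:/path/to/myproject
      DJANGO_SETTINGS_MODULE=myproject.settings
      # Then call the same interpreter the web application uses:
      0 * * * * /usr/bin/python /path/to/myproject/manage.py mycommand >> /var/log/mycommand.log 2>&1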

    Read the article

  • SAMBA and Linux ACLs -- "Permission denied" on write to share but file written nevertheless

    - by MCH
    I set up a writable share directory "/home/net/share" with an ACL like this:

      sudo mkdir -p "/home/net/share"
      sudo setfacl -m "u:localuser:rwx,u:remoteuser:rwx,g:users:rwx" "/home/net/share"

    My /etc/samba/smb.conf looks like this:

      [global]
          workgroup = w
          server string = server
          security = user
          load printers = no
          log file = /var/log/samba/%m.log
          max log size = 50
          dns proxy = no
          printing = bsd
          printcap name = /dev/null
          disable spoolss = yes
          encrypt passwords = true
          invalid users = nobody root
          follow symlinks = yes
          wide links = yes

      [share]
          comment = Writable by localuser and remoteuser
          path = /home/net/share
          valid users = remoteuser
          read only = no
          public = no
          printable = no

    Locally, localuser and remoteuser have user accounts and smbpasswds, and both can read, create and delete files in /home/net/share. But when I log on from a different machine, like this:

      sudo mount -t cifs //server/share mountpoint/ -o username=remoteuser

    I get "Permission denied" both when trying to create directories and files. Oddly, though, it does create files (not directories!) despite these messages. How can I get this working?
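
    A possible angle to check (a hedged sketch, not a confirmed fix): the ACL above applies only to the share directory itself, not to files and directories created inside it, and Samba's create/directory masks can strip bits the ACL would otherwise grant. Something like:

      # Add default ACL entries so new files and directories inherit the same rights:
      sudo setfacl -d -m "u:localuser:rwx,u:remoteuser:rwx,g:users:rwx" "/home/net/share"

      # And in the [share] section of smb.conf, keep Samba from masking those bits off:
      #   inherit acls = yes
      #   create mask = 0660
      #   directory mask = 0770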

    Read the article

  • Postfix sasl: Relay access Denied (state 14)

    - by Primoz
    I have Postfix installed with Dovecot. There are no problems when I send e-mail from my server; however, all incoming e-mail is rejected. My main.cf file:

      queue_directory = /var/spool/postfix
      command_directory = /usr/sbin
      daemon_directory = /usr/libexec/postfix
      mail_owner = postfix
      inet_interfaces = all
      mydestination = localhost, $mydomain, /etc/postfix/domains/domains
      virtual_maps = hash:/etc/postfix/domains/addresses
      unknown_local_recipient_reject_code = 550
      mynetworks = 127.0.0.0/8
      alias_maps = hash:/etc/aliases
      alias_database = hash:/etc/aliases
      home_mailbox = Maildir/
      debug_peer_level = 2
      debugger_command =
          PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin
          xxgdb $daemon_directory/$process_name $process_id & sleep 5
      sendmail_path = /usr/sbin/sendmail.postfix
      newaliases_path = /usr/bin/newaliases.postfix
      mailq_path = /usr/bin/mailq.postfix
      setgid_group = postdrop
      html_directory = no
      manpage_directory = /usr/share/man
      sample_directory = /usr/share/doc/postfix-2.3.3/samples
      readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES
      smtpd_sasl_type = dovecot
      smtpd_sasl_path = private/auth
      smtpd_sasl_auth_enable = yes
      smtpd_recipient_restrictions =
          check_policy_service inet:127.0.0.1:9999,
          permit_mynetworks,
          permit_sasl_authenticated,
          reject_non_fqdn_recipient,
          reject_unknown_recipient_domain,
          reject_unauth_destination,
      smtpd_sender_restriction = reject_non_fqdn_sender
      broken_sasl_auth_clients = yes

    UPDATE: Now, when e-mail comes in to the server, the server tries to reroute the mail. For example, if the message was sent to [email protected], my server changes that to [email protected] and then the mail bounces because there's no such domain on my server.
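
    One hedged way to narrow this down (the map path comes from the config above; the address is a placeholder): check what configuration Postfix is actually running with, and what the virtual map rewrites a given recipient to.

      # Show the effective (non-default) configuration Postfix is using:
      postconf -n
      # Ask the virtual map directly what a recipient address rewrites to:
      postmap -q "user@example.com" hash:/etc/postfix/domains/addresses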

    Read the article

  • Samba file shares - ownership of folder accessible for 1 group verified by MS Active Directory

    - by jackweirdy
    I have a machine set up to share a folder /srv/sambashare; here's an excerpt of the config file:

      [share]
          path = /srv/sambashare
          writable = yes

    The permissions on that folder are set to 700 and it is owned by nobody:nogroup at the moment. The problem I face is probably a simple one, but I'm fairly new to Samba so I'm not sure what to do. The contents of the share should be accessible to a particular user who will authenticate with domain credentials, checked against Active Directory by Kerberos. I haven't got Kerberos configured yet, as I wanted to test the share as soon as Samba was configured, albeit basically, to ensure that it works. I've noticed that I can only access and write to the share when the folder is either owned by the user logging in or made world-writable. The key issues are that this folder can't be world-writable, since it contains sensitive stuff, but at the same time it can't be owned by a local user or group, since the users come from the AD server. Does anyone know what I should do?

    Read the article

  • Case in-sensitivity for Apache httpd Location directive

    - by user57178
    I am working with a solution that requires the use of mod_proxy_balancer and an application server that both ignores case and mixes different case combinations in the URLs found in generated content. The configuration works; however, I now have a new requirement that causes problems. I need to create a Location directive (as per http://httpd.apache.org/docs/current/mod/core.html#location ) and have the URL-path interpreted in a case-insensitive way. This requirement comes from the need to add authentication directives to the location. As you might guess, users (or the application in question) changing one letter to a capital circumvents the protection instantly. httpd runs on a Unix platform, so every configuration directive is apparently case-sensitive by default. Should the regular expressions in the Location directive work in this case? Could someone please show me an example of such a configuration that should work? In case a regular expression cannot be forced to work case-insensitively, what part of httpd's source code should I look at modifying?
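
    A sketch of the regex approach the question asks about (the path and the auth settings are placeholders): Location regexes in httpd are PCRE, so an inline (?i) flag makes the whole match case-insensitive without touching the source.

      # Case-insensitive protection for /secure and everything below it:
      <LocationMatch "(?i)^/secure">
          AuthType Basic
          AuthName "Restricted"
          AuthUserFile /etc/httpd/conf/htpasswd
          Require valid-user
      </LocationMatch>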

    Read the article

  • Resume on 30 Days of SharePoint

    Dear readers, as you might have noticed... it was an organisational disaster on my end! Even though I continued my studies and research on Microsoft SharePoint 2013 during the last 30 days, I wasn't able to write an article a day to keep you posted on my progress. Nonetheless, I gathered a good number of additional blogs, mainly SharePoint MVP sites, and online forums which will be helpful in the next couple of weeks while I'm actually going to develop a C#-based client which will connect an existing 'legacy' application to SharePoint as a document management system (DMS), besides other already existing solutions.

    Finding excuses
    Well, no. Not really. I simply didn't block any, or enough, time every day to write down my progress during my own challenge. My log book on learning about SharePoint stands at 41 hours and 15 minutes for this month, which means that I spent an average of more than 1 hour per day on getting into SharePoint. I know that might sound a little bit low, but keep in mind that I went for the challenge on top of my daily job and private responsibilities. During the same period there were two priority 0 incidents from clients - external root cause - which took precedence over this leisure project.

    More to come
    Anyway, it was a first trial, and despite the low level of reporting on my blog, I'm confident about what I learned during the last 30 days, and I'm ready to implement the client's requirements. At least, I would say that I have a better understanding of the road map, or the path to walk, during the next month. As time and secrecy allow, I'm going to note down some bits and pieces... During the process of development, I'm going to 'cheat' on the challenge summary article and add links to those new entries, just for the sake of completeness.

    Next challenge?
    Hmm, there had been ideas during the last meetup of the Mauritius Software Craftsmanship Community (MSCC) regarding certifications in IT, and eventually we might organise some kind of a study group for specific exams, most probably Microsoft exams towards MCSD Web Developer or Windows Developer.

    Read the article

  • Package manager doesn't work anymore

    - by LukaD
    I'm using Ubuntu 10.10 and recently my package manager stopped working because of some problem with dependencies. I can't upgrade, install or uninstall anything at all. This is a huge problem. I couldn't find a solution with Google, so I'm asking here for help. This is what apt-get -f install outputs:

      LANG=en_US.UTF-8 sudo apt-get install -f
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      The following package was automatically installed and is no longer required:
        firefox-4.0-core
      Use 'apt-get autoremove' to remove them.
      0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
      1 not fully installed or removed.
      After this operation, 0B of additional disk space will be used.
      Setting up openjdk-6-jre-headless (6b20-1.9.5-0ubuntu1) ...
      update-alternatives: error: alternative path /usr/lib/jvm/java-6-openjdk/jre/bin/java doesn't exist.
      dpkg: error processing openjdk-6-jre-headless (--configure):
       subprocess installed post-installation script returned error exit status 2
      Errors were encountered while processing:
       openjdk-6-jre-headless
      E: Sub-process /usr/bin/dpkg returned an error code (1)
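
    A hedged sketch of one common way out of this state (the package name is taken from the output above; double-check before purging anything you still need): reinstall or remove the package whose post-installation script is failing, then let dpkg and apt finish configuring.

      # Try reinstalling the broken package first:
      sudo apt-get install --reinstall openjdk-6-jre-headless
      # If its postinst keeps failing, remove it and re-run configuration:
      sudo apt-get purge openjdk-6-jre-headless
      sudo dpkg --configure -a
      sudo apt-get -f install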

    Read the article

  • Remove Content-Length header in nginx proxy_pass

    - by Luc
    I use nginx with a proxy_pass directive. When the application the request is proxied to returns a response, it seems nginx adds a header containing the Content-Length. Is it possible to remove this additional header?

    UPDATE: I have re-installed nginx with the more_headers module but I still get the same result. My config is:

      upstream my_sock {
          server unix:/tmp/test.sock fail_timeout=0;
      }

      server {
          listen 11111;
          client_max_body_size 4G;
          server_name localhost;
          keepalive_timeout 5;

          location / {
              more_clear_headers 'Content-Length';
              proxy_pass http://my_sock;
              proxy_redirect off;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header Host $http_host;
          }
      }
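
    One hedged diagnostic before fighting the header itself (assumes a curl new enough to support --unix-socket): compare what the upstream emits on the socket with what nginx hands back to the client, to confirm where the Content-Length is actually being added.

      # Headers as produced by the app on the unix socket:
      curl -s -D - --unix-socket /tmp/test.sock http://localhost/ -o /dev/null
      # Headers as returned by nginx to the client:
      curl -s -D - http://localhost:11111/ -o /dev/null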

    Read the article

  • dhcp client service won't start

    - by xyious
    I have a laptop with 2 network interfaces and neither will get an IP address through DHCP. I found out that the DHCP Client service didn't start. Starting it manually gives error 2: file not found. I have checked that the files are there (both svchost and the dhcpcore DLL), the Local Service account has read access to the system32 folder, the path in the registry is correct, and I can access the file. I have tried netsh winsock reset and an IP reset. I have even added the Local Service account to the Administrators group. sfc /scannow also came up clean. I have no idea what else I can try; any suggestions are welcome. (Side note: it's Windows 7 32-bit with an Atheros WLAN adapter; I uninstalled Avira before any of the other troubleshooting.)
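
    A hedged next step (standard Windows tools, nothing specific to this machine): dump how the service is registered and compare the image path, the ServiceDll under Parameters, and the dependency list against a known-good Windows 7 install.

      rem How is the DHCP Client service registered (binary path, dependencies, account)?
      sc qc Dhcp
      rem Raw service entry, including Parameters\ServiceDll (expected to point at dhcpcore.dll):
      reg query HKLM\SYSTEM\CurrentControlSet\Services\Dhcp /s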

    Read the article

  • nginx proxy pass redirects ignore port

    - by Paul
    So I'm setting up a virtual path when pointing at a node.js app in my nginx conf. The relevant section looks like this:

      location /app {
          rewrite /app/(.*) /$1 break;
          proxy_pass http://localhost:3000;
          proxy_redirect off;
          proxy_set_header Host $host;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }

    This works great, except when my node.js app (an Express app) issues a redirect. As an example, the dev box runs nginx on port 8080, so the URL to the root of the node app looks like:

      http://localhost:8080/app

    When I call a redirect to '/app' from node, the actual redirect goes to:

      http://localhost/app
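
    A hedged sketch of one direction to try (not a confirmed fix for this exact setup): the forwarded Host header is $host, which carries no port, so any absolute URL the app builds from it loses ":8080". Forwarding $http_host instead preserves whatever host:port the client used.

      location /app {
          rewrite /app/(.*) /$1 break;
          proxy_pass http://localhost:3000;
          proxy_redirect off;
          # $http_host keeps the client's original host:port (e.g. localhost:8080):
          proxy_set_header Host $http_host;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }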

    Read the article

  • Custom command in right-click menu not working

    - by Luke
    I have added, via the registry, a right-click menu option for all file types, which is supposed to get the MD5 checksum for a file:

      HKEY_CLASSES_ROOT\*\shell\Checksum - Default: Get Checksum
      HKEY_CLASSES_ROOT\*\shell\Checksum\command - Default: checksum.cmd "%1"

    checksum.cmd simply clears the screen, calls fciv.exe using %1 and then pauses:

      @echo off
      cls
      fciv.exe %1
      pause

    Unfortunately, whilst the option "Get Checksum" appears correctly in the right-click menu, it doesn't perform the right action when clicked. When I click it, an "Open With" dialog opens, which is of course not what I want. Both checksum.cmd and fciv.exe are in the PATH. Anybody know what's going on?
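
    A hedged guess at the cause (the script location below is a placeholder): shell verbs don't always resolve a bare command name the way a console does - Explorer may be using a PATH captured before checksum.cmd was added, and a .cmd with no absolute path can fall back to the "Open With" dialog. Registering the full, quoted path sidesteps both issues; as a .reg file:

      Windows Registry Editor Version 5.00

      [HKEY_CLASSES_ROOT\*\shell\Checksum\command]
      @="\"C:\\Tools\\checksum.cmd\" \"%1\""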

    Read the article

  • Capture the build number for a remote-triggered Hudson job?

    - by EMiller
    I have a very simple inhouse web app from which certain Hudson builds (on another server) can be triggered remotely. I have no problem triggering the builds, but I don't know how to capture the associated build number for later reference. I'm using the buildWithParameters trigger, and the actual result of that call is just a mess of HTML - I don't believe it gives me back the build number. I started down the path of pulling the whole build list for the job (via the api), and then attempting to reconcile that list against my records - but that's much more complicated than I'd like it to be. I also considered sleeping for a few seconds after launching the job, and then grabbing the latestBuild from the Hudson api - but I'm sure that's going to go wrong at some point (someone will fire off two jobs quickly, and I'll get the association wrong).
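
    One hedged approach (server URL, job name and parameter name are placeholders, and exact remote-API paths can differ between Hudson versions): pass your own correlation token as a build parameter, then look the build up through the job's remote API instead of relying on timing.

      # Trigger the build with a token you generate yourself:
      TOKEN=$(uuidgen)
      curl -s "http://hudson.example.com/job/myjob/buildWithParameters?CORRELATION_ID=${TOKEN}"
      # Later, pull recent builds together with their parameters:
      curl -s "http://hudson.example.com/job/myjob/api/xml?depth=1" > builds.xml
      # builds.xml lists each build's <number> alongside its parameter values,
      # so searching it for ${TOKEN} recovers the build number (xmlstarlet/xmllint make this cleaner).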

    Read the article

  • Samba share doesn't have write permissions

    - by blsub6
    Alright, I've got one that should be really simple. I want a wide-open SMB share for my Windows 7 machine. Everyone should be able to access it, regardless of domain or username or anything. My smb.conf has:

      security = share
      guest account = nobody

    along with:

      [DC_Backup]
          path = /Windows_Backups/DC
          comment = Backup of Domain Controller
          force user = nobody
          guest ok = yes
          public = yes
          read only = no

    I can access it, but I cannot write to it. Windows keeps telling me I "need permission to perform this action". Where do I start?
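
    A hedged first check (the path comes from the share definition above; the group name depends on the distribution): with force user = nobody, the Unix permissions on the target directory have to allow the nobody account to write, regardless of what smb.conf says.

      # Who owns the backup target, and can 'nobody' write there?
      ls -ld /Windows_Backups/DC
      # One way to open it up for this guest-only share (use nobody:nobody on distros without nogroup):
      chown -R nobody:nogroup /Windows_Backups/DC
      chmod -R 0775 /Windows_Backups/DC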

    Read the article

  • You can step over await

    - by Alex Davies
    I’ve just found the coolest feature of VS 2012 by far. I thought that being able to silence an exception from the “exception was thrown” popup was awesome, and the “reload all” button when a project file changes is amazing, but this is way beyond all of that. You can step over awaits when you debug your code!! With F10!!! Ok, so that may not sound like such a big deal. You can step over ifs and whiles and no-one is celebrating. But await is different. await actually stops your method, signs up to be notified when a Task is finished, returns, and resumes your method at some indeterminate point in the future. You could even end up continuing on a completely different thread. All that happens, and all I have to do is press F10. I used to have to painstakingly set a breakpoint on the first line of my callback before stepping over any asynchronous method. Even when we started using async, my mouse would instinctively click the margin every time I wanted to go past an await. And then there were the times I was driven insane by my breakpoint getting hit by some other path of execution I don’t care about. I think this might have been introduced in the VS11 Beta; I’m pretty sure I tried it in the Async CTP in VS2010 and it didn’t work. Now it does! Woop!

    Read the article

  • Problem after installing node.js on Debian Lenny

    - by gmunk
    I managed to install node.js successfully on my machine, but when invoking make test I get an error message:

      python tools/test.py --mode=release simple
      === release test-net-pingpong ===
      Path: simple/test-net-pingpong
      server listening on 20989 localhost
      server listening on 20988 undefined
      Error: EAFNOSUPPORT, Address family not supported by protocol
          at net:1041:19
          at dns:105:7
          at EventEmitter._tickCallback (node.js:48:25)
          at node.js:176:9

    I found out that EAFNOSUPPORT means the OS does not support a particular protocol that a program tries to use. So from what I can deduce, my Debian does not have support for DNS? Any help is appreciated!
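
    A hedged guess at which "address family" is missing (not confirmed for this box): EAFNOSUPPORT from a test that binds to localhost often means an address family such as IPv6 is unavailable, rather than anything DNS-related. A quick check on the host:

      # Is IPv6 available to this kernel?
      test -e /proc/net/if_inet6 && echo "IPv6 present" || echo "no IPv6"
      # Is the ipv6 module loaded (if built as a module)?
      lsmod | grep -w ipv6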

    Read the article

  • nginx deny directory and files to be downloaded

    - by YeppThat'sMe
    Gurus, I have a problem and I don't know how to solve it. I am working with Git and Compass/SASS on some projects, and now I want to protect those directories. When I go to the folder itself it's all fine: I get what I expected, a 403 Forbidden.

      location ~ /\.git {
          deny all;
      }

    But when I try to use the full path to the config file from Git, the browser starts to download it. Same scenario with Compass: there is a config.rb file within the folder which also starts to download. How can I prevent this behaviour? How can I deny downloading specific files?
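
    A sketch of one way to cover both cases (the ordering assumption is that other regex locations exist in the server block, which isn't shown here): regex locations are matched in the order they appear, so a deny rule placed before the others and widened to include the Compass file also catches direct file URLs.

      # Place before other regex locations in the same server block:
      location ~ (/\.git|/config\.rb$) {
          return 404;    # or keep "deny all;" for a 403
      }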

    Read the article

  • SSH and Latent Connections (e.g., satellite connections)

    - by user71494
    Most of the week I live in the city, where I have a typical broadband connection, but most weekends I'm out of town and only have access to a satellite connection. Trying to work over SSH on a satellite connection, while possible, is hardly desirable due to the high latency (> 1 second). My question is this: is there any software that will do something like buffering keystrokes on my local machine before they're sent over SSH, to help make the lag on individual keystrokes a little more transparent? Essentially I'm looking for something that would reduce the effects of the high latency for everything except commands (e.g., opening files, changing to a new directory, etc.). I've already discovered that vim can open remote files locally and write them back remotely, which, while a huge help, is not quite what I'm looking for, since it only works when editing files and requires opening a connection every time a read/write occurs. (For anyone who may not know how to do this and is curious, just use this command: vim scp://host/file/path/here)

    Read the article

  • What's the deal with NTFS tags in windows 7

    - by polarix
    So back in the days of 'Longhorn' there was this WinFS idea, which was both cool-looking and scary-looking. Then it seemed to disappear, but we were told that many of the concepts would be rolled into Vista. Then maybe Win7. Anyway, nowadays if you look at a Win7 Explorer window, you can have columns with a lot of tag-based info about a file (right-click on a column header > More...), including one called "Tags". Is this something in NTFS that can be modified per-file somehow? Is its GUI hidden, or is this something that's been infinitely delayed, or is it just a figment of my imagination? It sure would be nice to be able to get around the NTFS path 256-character limit for searches, and to filter file folders as in Excel 2007.

    Read the article

  • How to suppress the unsolicited footer when converting HTML -> PDF with Acrobat?

    - by gojira
    I often convert and combine (via the context menu) HTML pages to PDF using Acrobat (not Acrobat Reader). I use Adobe Acrobat Pro 9 Extended, version 9.1.2. The converted PDFs always have the full path of the original file at the bottom of the PDF page; they also have an additional header line on the document. I need to suppress that. I do not want the unsolicited header and footer in the resulting PDF files, as they are a pain to remove manually, and with a certain page count per document it becomes impossible. Is it possible to suppress them, and if so, how?

    Read the article

  • "save the changes" message after removing the protection from workbook Excel 2010

    - by abbasi
    Some time ago I protected an Excel 2010 file via File > Protect Workbook > Encrypt with Password and gave it a password. Now I have removed that password using the following method:

      1. Open the workbook and use Save As.
      2. In the lower right of the file window, click "Tools".
      3. Choose "General Options".
      4. Clear the password.
      5. Save over your old file.

    The file now opens without asking for a password. But the problem is that when I open it and close it immediately, without even moving the active cell, the message "Do you want to save the changes you made to 'test.xlsx'?" appears. No changes have been made to the file, so why do I see this message every time I close it? Has the file been corrupted?

    Read the article

  • Drupal 7 on Windows - File Module Problems

    - by TimothyP
    Installed Drupal 7 using the Web Platform Installer on Windows 2008. For some reason the file module, when you upload a file, uses the first few letters of the filename as the unique key to store in the database, which of course causes problems very fast. I'm wondering, does anybody have a workaround for this?

      An AJAX HTTP request terminated abnormally.
      Debugging information follows.
      Path: /file/ajax/field_file/und/0/form-EBMatHzV5cZXcWvXJtdADSdyw7Id9-GIpFM_NCJg_a4
      StatusText: n/a
      ResponseText:
      Error message
      PDOException: SQLSTATE[23000]: [Microsoft][SQL Server Native Client 10.0][SQL Server]
        Cannot insert duplicate key row in object 'dbo.file_managed' with unique index 'uri_unique'.
        in drupal_write_record() (line 6776 of ..........\includes\common.inc).
      Error
      The website encountered an unexpected error. Please try again later.
      ReadyState: undefined

    (PS: I hope Super User is the right place to ask.)

    Read the article

  • Virtualbox, merging snapshots and base disk

    - by Henrik
    Hi, I have a virtual machine with about 30 snapshots in branches. The current development path is 22 snapshots plus the base disk. The number of files now seems to be having an impact on IO on the dev laptop I'm using (I don't know whether it is host disk performance with the 140GB total size spread over a lot of fragments, or just the fact that reads hit sectors distributed across a lot of files). I would like to merge the current development branch of snapshots together with the base disk, but I am unsure whether the following command produces the correct outcome; I am not able to boot the resulting disk after the procedure completes (5-6 hours):

      vboxmanage clonehd "C:\VPC-Storage\.VirtualBox\Machines\CRM\Snapshots\{245b27ac-e658-470a-b978-8e62137c33b1}.vhd" "E:\crm-20100624.vhd" --format VHD --type normal

    Could anyone confirm whether this is the correct approach or not?

    Read the article
