Search Results

Search found 24334 results on 974 pages for 'directory loop'.


  • Deploying Data Mining Models using Model Export and Import, Part 2

    - by [email protected]
    In my last post, Deploying Data Mining Models using Model Export and Import, we explored using DBMS_DATA_MINING.EXPORT_MODEL and DBMS_DATA_MINING.IMPORT_MODEL to move a model from one system to another. In this post, we'll look at two distributed scenarios that make use of this capability, along with a tip for easily moving models from one machine to another using only Oracle Database, rather than an external file transport mechanism such as FTP.
    In the first scenario, consider a company with geographically distributed business units, each collecting and managing their data locally for the products they sell. Each business unit has in-house data analysts who build models to predict which products to recommend to customers in their space. A central telemarketing business unit also uses these models to score new customers locally using data collected over the phone. Since the models recommend different products, each customer is scored using each model. This is depicted in Figure 1.
    Figure 1: Target instance importing multiple remote models for local scoring
    In the second scenario, consider multiple hospitals that collect data on patients with certain types of cancer. The data collection is standardized, so each hospital collects the same patient demographic and other health / tumor data, along with the clinical diagnosis. Instead of each hospital building its own models, the data is pooled at a central data analysis lab where a predictive model is built. Once completed, the model is distributed to hospitals, clinics, and doctors' offices, which can score patient data locally.
    Figure 2: Multiple target instances importing the same model from a source instance for local scoring
    Since this blog focuses on model export and import, we'll only discuss what is necessary to move a model from one database to another. Here, we use the package DBMS_FILE_TRANSFER, which can move files between Oracle databases. The script is fairly straightforward, but requires setting up a database link and directory objects. We saw how to create directory objects in the previous post. To create a database link to the source database from the target, we can use, for example:
        create database link SOURCE1_LINK connect to <schema> identified by <password> using 'SOURCE1';
    Note that 'SOURCE1' refers to the service name of the remote database entry in your tnsnames.ora file. From SQL*Plus, first connect to the remote database and export the model. Note that the model_file_name does not include the .dmp extension, because export_model appends "01" to this name. Next, connect to the local database, invoke DBMS_FILE_TRANSFER.GET_FILE, and import the model. Note that "01" is eliminated in the target system file name.
        connect <source_schema>/<password>@SOURCE1_LINK;
        BEGIN
          DBMS_DATA_MINING.EXPORT_MODEL ('EXPORT_FILE_NAME' || '.dmp',
                                         'MY_SOURCE_DIR_OBJECT',
                                         'name =''MY_MINING_MODEL''');
        END;

        connect <target_schema>/<password>;
        BEGIN
          DBMS_FILE_TRANSFER.GET_FILE ('MY_SOURCE_DIR_OBJECT',
                                       'EXPORT_FILE_NAME' || '01.dmp',
                                       'SOURCE1_LINK',
                                       'MY_TARGET_DIR_OBJECT',
                                       'EXPORT_FILE_NAME' || '.dmp' );
          DBMS_DATA_MINING.IMPORT_MODEL ('EXPORT_FILE_NAME' || '.dmp',
                                         'MY_TARGET_DIR_OBJECT');
        END;
    To clean up afterward, you may want to drop the exported .dmp file at the source and the transferred file at the target. For example:
        utl_file.fremove('&directory_name', '&model_file_name' || '.dmp');
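    As a minimal sketch of that cleanup step, using the directory objects and file names from the example above (run the first block while connected to the source schema, the second while connected to the target schema):
        -- at the source: remove the exported dump file (export_model appended "01")
        BEGIN
          UTL_FILE.FREMOVE('MY_SOURCE_DIR_OBJECT', 'EXPORT_FILE_NAME' || '01.dmp');
        END;

        -- at the target: remove the transferred dump file
        BEGIN
          UTL_FILE.FREMOVE('MY_TARGET_DIR_OBJECT', 'EXPORT_FILE_NAME' || '.dmp');
        END;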

    Read the article

  • How to get pngcrush to overwrite original files?

    - by DisgruntledGoat
    I've read through man pngcrush and it seems that there is no way to crush a PNG file and save it over the original. I want to compress several folders worth of PNGs so it would be useful to do it all with one command! Currently I am doing pngcrush -q -d . *.png then manually cut-pasting the files from the tmp directory to the original folder. So I guess using mv might be the best way to go? Any better ideas?
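    One workaround (a sketch, not a built-in pngcrush in-place mode): crush each file to a temporary name and then mv it over the original, recursing into subfolders via find. It relies only on the basic pngcrush infile outfile invocation, so it should work with any version:
        # crush every PNG under the current directory, replacing the original only if pngcrush succeeds
        find . -name '*.png' -print0 | while IFS= read -r -d '' f; do
            pngcrush -q "$f" "$f.tmp" && mv "$f.tmp" "$f"
        done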

    Read the article

  • Have I fixed my partition problem with os x 10.5.8? Are my GPT and MBR back to normal?

    - by David Schaap
    I'm new to Linux and I have overstepped my abilities. I tried dual booting OS X 10.5.8 with Ubuntu 11.10 using rEFIt, but I have been having problems with partitioning. Instead of enduring more headaches, I've made the decision to simply use Ubuntu in VirtualBox. I've tried to return my HDD to normal, but I am looking for confirmation that my partitions are OK. Here is the report from Partition Inspector:
        *** Report for internal hard disk ***

        Current GPT partition table:
        #      Start LBA      End LBA  Type
        1         409640    233917359  Mac OS X HFS+

        Current MBR partition table:
        # A    Start LBA      End LBA  Type
        1              1    234441647  ee  EFI Protective

        MBR contents:
        Boot Code: GRUB

        Partition at LBA 409640:
        Boot Code: None
        File System: HFS Extended (HFS+)
        Listed in GPT as partition 1, type Mac OS X HFS+
    Also, my HDD directory has a bunch of extra folders in it, and they appear to be Ubuntu related, although Ubuntu is no longer installed. There are folders like bin, sbin, cores, var, user, and so on. Those folders aren't supposed to be there, right? Thanks in advance.

    Read the article

  • Xen 4.0.1 on Ubuntu 10.10 not booting

    - by Disco
    I'm trying to get Xen 4.0.1 to run as dom0 on a fresh/clean install of 10.10 desktop (x64). I followed the step-by-step tutorial at http://wiki.xensource.com/xenwiki/Xen4.0
    I have the pvops kernel in /boot, and also included ext4 fs support by recompiling the kernel with:
        make -j6 linux-2.6-pvops-config CONFIGMODE=menuconfig
        make -j6 linux-2.6-pvops-build
        make -j6 linux-2.6-pvops-install
    Here's my grub entry:
        menuentry 'Xen4' --class ubuntu --class gnu-linux --class gnu --class os {
            recordfail
            insmod part_msdos
            insmod ext2
            insmod ext3
            set root='(hd0,msdos1)'
            search --no-floppy --fs-uuid --set 2bf3177a-92fd-4196-901a-da8d810b04b4
            multiboot /xen-4.0.gz dom0_mem=1024M loglvl=all guest_loglvl=all
            module /vmlinuz-2.6.32.27 root=UUID=2bf3177a-92fd-4196-901a-da8d810b04b4 ro
            module /initrd.img-2.6.32.27
        }
    blkid /dev/sda1 gives:
        /dev/sda1: UUID="2bf3177a-92fd-4196-901a-da8d810b04b4" TYPE="ext3"
    My partition scheme is /boot (ext3) and / (ext4). Whatever option I've tried, I end up with:
        mounting none on /dev failed: no such file or directory
    and a message complaining that it cannot find the device with uuid ... It's driving me crazy; if someone has a clue ...

    Read the article

  • Postfix and tmpfs for /var/spool

    - by Rob Fisher
    My main disk is an SSD, so in order to preserve its lifetime by reducing writes I followed some advice and made /var/spool a RAM disk by adding this line to /etc/fstab:
        tmpfs /var/spool tmpfs defaults,noatime,mode=1777 0 0
    Later I configured Postfix, because I have a RAID array on my system and mdadm wants to send me email if the RAID array fails, which sounds like a fine idea. Email sending worked fine until I rebooted, at which point:
        postfix: fatal: open /etc/postfix-out/main.cf: No such file or directory
    The fix for this is apparently:
        mkdir /var/spool/postfix
        postfix check
    Then I found I also had to do:
        mkfifo /var/spool/postfix/public/pickup
        service postfix restart
    Now sending emails works fine... until the next reboot. So: what is the most correct way to recreate the contents of /var/spool/postfix automatically at boot time if it does not exist? I am using Ubuntu Server 12.04.
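    One low-tech option is simply to automate the manual steps above at boot. A sketch only: it assumes /etc/rc.local is still executed at the end of boot on 12.04 and runs after the Postfix init script has (unsuccessfully) started; if not, the same commands belong in an upstart job ordered before postfix.
        # /etc/rc.local -- add before the final "exit 0"
        # /var/spool is tmpfs, so rebuild Postfix's spool area on every boot
        mkdir -p /var/spool/postfix
        postfix check                                      # recreates missing queue directories and fixes permissions
        mkfifo /var/spool/postfix/public/pickup 2>/dev/null || true
        service postfix restart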

    Read the article

  • Tracking multiple subdomains and domains going to the same site, separately in Google Analytics

    - by miles
    I have a new site that has multiple top-level domains and subdomains all going to it: www.domain.com, campaign.domain.com, chicago.domain.com, domain2.com - all go to the same site/site directory. Right now I have one Google Analytics account profile set up for it, but I want to be able to track the traffic that is hitting those different URLs separately. The domains are being routed on the server-side (not .htaccess). How can I do this in Google Analytics? Do I need to create filters? Or create different profiles for each domain?

    Read the article

  • Ubuntu One Bookmark sync not working.

    - by Rob
    Everything in Ubuntu One sync works great except bookmark sync. I tried the wiki answer that said to run:
        killall beam.smp beam
        rm ~/.config/desktop-couch/desktop-couchdb.ini
        dbus-send --session --dest=org.desktopcouch.CouchDB --print-reply --type=method_call / org.desktopcouch.CouchDB.getPort
    This is what my terminal came back with:
        robin@robin-MIDWAY:~$ killall beam.smp beam
        beam: no process found
        robin@robin-MIDWAY:~$ rm ~/.config/desktop-couch/desktop-couchdb.ini
        rm: cannot remove `/home/robin/.config/desktop-couch/desktop-couchdb.ini': No such file or directory
        robin@robin-MIDWAY:~$ dbus-send --session --dest=org.desktopcouch.CouchDB --print-reply --type=method_call / org.desktopcouch.CouchDB.getPort
        Error org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
        robin@robin-MIDWAY:~$
    I'm a computer "newbie" so it's possible I'm doing something wrong. Are there any tutorials out there on how to use CouchDB? I have Bindwood installed.

    Read the article

  • Running a .bash file in Eclipse

    - by Anne Ambe
    I know this is really an Eclipse issue, but I can't seem to log in to their forum. I am running Eclipse Juno for some C/C++ development. However, I wrote a .bash script that initiates the entire program. As an input argument to this script, I have a configuration file which is one directory lower than the .bash file. In the terminal I just do:
        ./startenb.bash ./CONF/ANNE
    and it runs just fine. How can I configure the external tools in Eclipse to take this file path as an input argument? Any help or old thread vaguely addressing this issue is highly welcome.

    Read the article

  • Adding Custom Pipeline code to the Pipeline Component Toolbox

    - by Geordie
    Add ... "C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\GacUtil.exe" /nologo /i "$(TargetPath)" /f  xcopy "$(TargetPath)" "C:\Program Files\Microsoft BizTalk Server 2006\Pipeline Components" /d /y to the 'Post Build event command line' for the pipeline support component project in visual studio. This will automatically put the support dll in the GAC and BizTalk’s Pipeline component directory on building the solution/project. Build the project. Right Click on title bar and select Choose items. Click on the BizTalk Pipeline Components tab and select the new component from the list. If it is not available in the list Browse to the BizTalk pipeline component folder (C:\Program Files\Microsoft BizTalk Server 2006\Pipeline Components) and select the dll.

    Read the article

  • Database Consolidation onto Private Clouds - updated for Oracle Database 12c

    - by B R Clouse
    One of our team's most popular white papers has been expanded and updated to discuss Oracle Database 12c. Now available on our OTN page, the new version of Database Consolidation onto Private Clouds covers best practices for consolidation with the pluggable databases that the new multitenant architecture provides, plus expanded information on the database and schema consolidation options. These are the consolidation models the paper evaluates:
        server
        database
        schema
        pluggable databases
    Key considerations for consolidating workloads which the paper explores:
        Choosing a consolidation model
        How PDBs solve the IT complexity problem
        Isolation in consolidated environments
        Cloud pool design
        Complementary workloads
        Enterprise Manager 12c for consolidation planning and operations
    Many more white papers have been updated or are new for Oracle Database 12c. We'll continue to highlight those which tie directly to your journey to enterprise cloud.

    Read the article

  • Help with creating symbolic link

    - by user1737794
    I'm a little bit confused about how symbolic links work; I hope someone can guide me in the right direction. I want to put a demo of our software online, which normally only runs locally on a Mac Mini. So I put all the files in /var/www on my Ubuntu 12.04 server installation. There are a lot of hardcoded links in the software which point to "/Applications/XAMPP/xamppfiles/htdocs/narrowcasting". Of course I could change all this code in my html/php files in /var/www, but that would be quite annoying. I hope I can fix this by creating a symbolic link. For example, I have a directory called thumb in /var/www/thumb. The php code is trying to put an image in /Applications/XAMPP/xamppfiles/htdocs/narrowcasting/thumb. Can anyone give me a tip on how to achieve this with a symbolic link? Thanks in advance.
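    A minimal sketch of such a link (assuming the XAMPP-style parent directories don't already exist on the Ubuntu box):
        # create the hardcoded parent path, then point "narrowcasting" at the real web root
        sudo mkdir -p /Applications/XAMPP/xamppfiles/htdocs
        sudo ln -s /var/www /Applications/XAMPP/xamppfiles/htdocs/narrowcasting
    After that, a write to /Applications/XAMPP/xamppfiles/htdocs/narrowcasting/thumb lands in /var/www/thumb, as long as the web server user has write permission there.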

    Read the article

  • Amnesia doesn't start due to audio problems

    - by james
    I have a problem with the Amnesia game. After the intro, and after clicking the continue button a few times, the game crashes just when it is supposed to start. Here is the console output:
        ALSA lib pcm_dmix.c:1018:(snd_pcm_dmix_open) unable to open slave
        ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
        ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
        ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
        ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
        ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
        ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
        ALSA lib audio/pcm_bluetooth.c:1614:(audioservice_expect) BT_GET_CAPABILITIES failed : Input/output error(5)
        ALSA lib pcm_dmix.c:957:(snd_pcm_dmix_open) The dmix plugin supports only playback stream
        ALSA lib pcm_dmix.c:1018:(snd_pcm_dmix_open) unable to open slave
        Cannot connect to server socket err = No such file or directory
        Cannot connect to server socket
        jack server is not running or cannot be started
    I should mention that I have both integrated graphics and an integrated sound card.

    Read the article

  • How to Encrypt Your Home Folder After Installing Ubuntu

    - by Chris Hoffman
    Ubuntu offers to encrypt your home folder during installation. If you decline the encryption and change your mind later, you don’t have to reinstall Ubuntu. You can activate the encryption with a few terminal commands. Ubuntu uses eCryptfs for encryption. When you log in, your home directory is automatically decrypted with your password. While there is a performance penalty to encryption, it can keep private data confidential, particularly on laptops that may be stolen.
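    A minimal sketch of the usual eCryptfs migration commands for this (an assumption about the exact steps the article walks through; "username" is a placeholder, and the migration must be run from a second administrator account while the target user is logged out):
        sudo apt-get install ecryptfs-utils cryptsetup
        sudo ecryptfs-migrate-home -u username    # "username" is the account whose home folder gets encrypted
    Log in as the migrated user before rebooting so the wrapped passphrase is finalized, and keep a copy of the recovery passphrase the tool tells you to record.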

    Read the article

  • User Independent Share Folder

    - by ell
    At the moment I have a folder in my home directory that is shared on my laptop and can also be accessed by the other Windows desktop PCs on my network. But now I have decided to make my home folder inaccessible to other users on my laptop, so other people cannot look at my files if they have a user account on my laptop; I set the permissions to none for everyone apart from me. I then changed the shared folder (/home/elliot/Shared) to allow all access, but my Windows computers and other users on my laptop cannot access it even though they have the right permissions. I think this is because they don't have access to the home folder in which the Shared folder is stored. Where should I store a new Shared folder on my laptop? Should I put it at /home/Shared? Or, alternatively, is there a way I can allow other users to access my /home/elliot/Shared folder even if /home/elliot is inaccessible? Thanks in advance, ell.
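    Both options can work; a minimal sketch of each (paths follow the question, and the permission choices are assumptions to adapt):
        # option 1: a shared folder outside any home directory, writable by everyone on the machine
        sudo mkdir -p /home/Shared
        sudo chmod 1777 /home/Shared      # sticky bit: anyone can add files, only owners can remove theirs

        # option 2: keep /home/elliot/Shared but grant traversal (not listing) through /home/elliot
        chmod o+x /home/elliot            # others may pass through the directory without reading its contents
        chmod o+rx /home/elliot/Shared
    The execute bit without the read bit on /home/elliot is what lets other accounts reach the Shared subfolder while still being unable to list or open anything else in the home directory.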

    Read the article

  • apache domain redirect to subfolder

    - by Dennis
    I have a hosting account with GoDaddy. It's a Linux system running Apache. The way they do their setup, your primary domain is the root folder, and when you add a subdomain it becomes a subfolder of the root, which sucks. I want to set up a subfolder structure to organize my domains. I called GoDaddy support and they said to use redirects, but they did not know how to do that. How it's set up now:
        primary domain: www.domain.com  ->  /
        sub.domain.com                  ->  /sub
    I want to create a directory structure and then redirect to each, but only show www.domain.com in the URL:
        www.domain.com  ->  /domain/www
        sub.domain.com  ->  /domain/sub
    I tried using:
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www.)?domain.com$
        RewriteRule ^(/)?$ domain/www [L]
    but it just changes the URL to www.domain.com/domain/www. Can this be done in .htaccess?
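    A sketch of an internal rewrite that usually covers this layout (it assumes the .htaccess lives in the account's root folder; the folder names are taken from the question, and the sub sites would get their own host-matching blocks):
        RewriteEngine On
        # send everything for domain.com to the /domain/www subfolder, without changing the visible URL
        RewriteCond %{HTTP_HOST} ^(www\.)?domain\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/domain/www/
        RewriteRule ^(.*)$ /domain/www/$1 [L]
    Because there is no R flag, mod_rewrite performs an internal rewrite rather than an external redirect, so the browser keeps showing www.domain.com; the extra RewriteCond stops the rule from looping once the request is already inside /domain/www/.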

    Read the article

  • How to reset a .bashrc file that was edited to set the Android SDK PATH

    - by revan
        bash: export: `/home/entw/bin:/usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/bin': not a valid identifier
        bash: /home/entw/.bashrc: line 111: unexpected EOF while looking for matching `"'
        bash: /home/entw/.bashrc: line 112: syntax error: unexpected end of file
        entw@entwine-desktop:~$
    This is the error I keep getting; it shows whenever I open a terminal. These are the commands I ran:
        sudo gedit $HOME/.bashrc
    I added some PATH variables, such as the Android SDK, and then ran
        source ~/.bashrc
    which produced the error above. But if I try to open that file again, it shows the error "file or directory not found". What do I do to set everything right? Please, any help?
    This is the forum thread I tried to follow: http://forum.xda-developers.com/showthread.php?t=919425 (point 2).
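    One way out (a sketch; the SDK directory below is a placeholder for wherever the Android SDK actually lives): restore the stock .bashrc from /etc/skel, then re-add the PATH line with proper quoting so there is no unmatched quote or stray space:
        cp /etc/skel/.bashrc ~/.bashrc     # restore Ubuntu's default .bashrc
        # append a correctly quoted PATH line (adjust the SDK directory to your install)
        echo 'export PATH="$PATH:$HOME/android-sdk-linux/tools:$HOME/android-sdk-linux/platform-tools"' >> ~/.bashrc
        source ~/.bashrc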

    Read the article

  • CSS just for most basic HTML

    - by Gerenuk
    I've read that my note system, Wikidpad, which exports to very simple HTML, can use CSS (http://wikidpad.sourceforge.net/help/HtmlCss.html). The elements in the output are nothing more than basic headings, bullet points and tables. I'd like to try some kind of improved style, but as I have no knowledge of CSS, the best I can do is save some Myfile.css to a directory :) However, if I google "CSS template" I get all sorts of complicated results that I cannot make sense of :( Am I using the wrong terminology? Can you suggest what I should search for, or maybe you even know a resource where I can get a simple CSS file covering decent standard HTML elements? I do not wish to make custom adjustments.
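    For reference, a minimal hand-written stylesheet of the kind being asked about could look like this (a sketch only; the colours and sizes are arbitrary, and it covers just body text, headings, lists and tables):
        /* Myfile.css - basic styling for plain exported HTML */
        body       { font-family: sans-serif; max-width: 45em; margin: 2em auto; line-height: 1.5; color: #333; }
        h1, h2, h3 { color: #205081; border-bottom: 1px solid #ddd; padding-bottom: 0.2em; }
        ul         { padding-left: 1.5em; }
        table      { border-collapse: collapse; }
        th, td     { border: 1px solid #ccc; padding: 0.3em 0.6em; }
        th         { background: #f0f0f0; }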

    Read the article

  • Apache Server SSL Problems

    - by Kid XD
    Hi, there is this weird problem going on with putting SSL on the server. I keep getting the error below in the terminal. I already created the .key and .crt files (using openssl), placed them in the conf.d directory, and configured things, so there is something I did wrong there. Thanks for the help if anyone can assist.
        service apache2 reload
        Syntax error on line 1 of /etc/apache2/conf.d/www.domainname.crt
        Invalid command '-----BEGIN', perhaps misspelled or defined by a module not included in the server configuration
        Action 'conftest' failed.
        The Apache error log may have more information.
        ...fail!
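    The "Invalid command '-----BEGIN'" line means Apache is parsing the certificate itself as a configuration file, because everything under conf.d is read as Apache configuration. A sketch of the usual arrangement instead (the file names and paths here are assumptions to adapt): keep the .crt and .key out of conf.d and reference them from an SSL virtual host:
        # e.g. /etc/apache2/sites-available/default-ssl
        <VirtualHost *:443>
            ServerName www.domainname.com
            DocumentRoot /var/www
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/www.domainname.crt
            SSLCertificateKeyFile /etc/ssl/private/www.domainname.key
        </VirtualHost>
    Then enable the pieces and reload: sudo a2enmod ssl, sudo a2ensite default-ssl, sudo service apache2 reload.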

    Read the article

  • cifs mounted NAS drive treats subdirectories as files

    - by Mike Krejci
    I have a NAS drive mounted in my fstab as follows:
        //192.168.0.182/Videos /mnt/NAS_02 cifs iocharset=utf8,uid=65534,gid=65534,guest,rw 0 0
    It mounts fine, and in the terminal, using cd and ls, you can view all the files and navigate all the directories. However, in any other program all subdirectories under the mount are treated as files and I can't open them. Take a tree like this, for example:
        /Movies
            /Comedy
            /Drama
            Movie.mp4
    I can enter the directory /Movies, but the directories /Movies/Comedy and /Movies/Drama show up as files and I can't enter them. The permissions are all fine. I just upgraded Ubuntu; I was using smbfs on the same system before and it worked fine. It's under cifs that it no longer works. Any thoughts? I can't seem to find any solutions.

    Read the article

  • Imaging: Paper Paper Everywhere, but None Should be in Sight

    - by Kellsey Ruppel
    Author: Vikrant Korde, Technical Architect, Aurionpro's Oracle Implementation Services team
    My wedding photos are stored in several shoeboxes. Yes... I got married before digital photography was mainstream... which means I'm old. But my parents are really old. They have shoeboxes filled with vacation photos on slides (I doubt many of you have even seen a home slide projector... and I hope you never do!). Neither I nor my parents should have shoeboxes filled with any form of photographs whatsoever. They should obviously live in the digital world... with no physical versions in sight (other than a few framed on our walls).
    Businesses grapple with similar challenges. But instead of shoeboxes, they have file cabinets and warehouses jam-packed with paper invoices, legal documents, human resource files, material safety data sheets, incident reports, and the list goes on and on. In fact, regulatory and compliance rules govern many industries, requiring that this paperwork be available for any number of years. It's a real challenge... especially trying to find archived documents quickly, and many times with no backup.
    Which brings us to a set of technologies called Image Process Management (or simply Imaging or Image Processing) that are transforming these antiquated, paper-based processes. Oracle's WebCenter Content Imaging solution is a combination of the WebCenter suite, which offers a robust set of content and document management features, and the Business Process Management (BPM) suite, which helps to automate business processes through the definition of workflows and business rules. Overall, the solution provides an enterprise-class platform for end-to-end management of document images within transactional business processes. It's a solution that provides all of the capabilities needed - from document capture and recognition, to imaging and workflow - to effectively transform your 'shoeboxes' of files into digitally managed assets that comply with strict industry regulations.
    The terminology can be quite overwhelming if you're new to the space, so we've provided a summary of the primary components of the solution below, along with a short description of the two paths that can be executed to load images of scanned documents into Oracle's WebCenter suite.
    WebCenter Imaging (WCI): the electronic document repository that provides security, annotations, and search capabilities, and is the primary user interface for managing work items in the imaging solution.
    SOA & BPM Suites (workflow): provide business process management capabilities, including human tasks, workflow management, service integration, and all other standard SOA features. It's interesting to note that there are a number of 'jumpstart' processes available to help accelerate the integration of business applications, such as the accounts payable invoice processing solution for E-Business Suite that facilitates the processing of large volumes of invoices.
    WebCenter Enterprise Capture (WEC): expedites the capture of paper documents to digital images, offering high-volume scanning and importing from email, and allows for flexible indexing options.
    WebCenter Forms Recognition (WFR): automatically recognizes, categorizes, and extracts information from paper documents with greatly reduced human intervention.
    WebCenter Content: the backend content server that provides versioning, security, and content storage.
    There are two paths that can be executed to send data from WebCenter Capture to WebCenter Imaging, both of which are described below:
    1. Direct Flow - This is the simplest and quickest way to push an image scanned from WebCenter Enterprise Capture (WEC) to WebCenter Imaging (WCI), using the bare minimum metadata. The WEC activities are: the paper document is scanned (or imported from email); the scanned image is indexed using a predefined indexing profile; and the image is committed directly into the process flow.
    2. WFR (WebCenter Forms Recognition) Flow - This is the more complex process, during which data is extracted from the image using a series of operations including Optical Character Recognition (OCR), Classification, Extraction, and Export. This process creates three files (TIFF, XML, and TXT), which are fed to the WCI Input Agent (the high-speed import/filing module). The WCI Input Agent directory is a standard ingestion method for adding content to WebCenter Imaging; the process is as follows: WEC commits the batch using the respective commit profile. A TIFF file is created, passing data through the file name by including values separated by "_" (underscores). WFR completes OCR, classification, extraction, and export, and pulls the data from the image. In addition to the TIFF file, which contains the document image, an XML file containing the extracted data and a TXT file containing the metadata that will be filled in WCI are also created. All three files are exported to WCI's Input Agent directory. Based on previously defined "input masks", the WCI Input Agent picks up the seeding file (often the TXT file). Finally, the TIFF file is pushed into UCM and a unique web-viewable URL is created. Based on the mapping data read from the TXT file, a new record is created in the WCI application.
    Although these processes may seem complex, the Oracle components work seamlessly together to achieve a high-performing and scalable platform. The solution has been field tested at some of the largest enterprises in the world and has transformed millions and millions of paper-based documents into more easily manageable digital assets. For more information on how an Imaging solution can help your business, please contact [email protected] (for U.S. West inquiries) or [email protected] (for U.S. East inquiries).
    About the Author: Vikrant is a Technical Architect in Aurionpro's Oracle Implementation Services team, where he delivers WebCenter-based Content and Imaging solutions to Fortune 1000 clients. With more than twelve years of experience designing, developing, and implementing Java-based software solutions, Vikrant was one of the founding members of Aurionpro's WebCenter-based offshore delivery team. He can be reached at [email protected].

    Read the article

  • Hiding php includes from search spiders?

    - by 21stcn
    Quick and simple question. I have 80+ html files which I want to be crawled. They are individual product pages. Each of these pages calls its content using php includes. These php include files are in a separate folder on the server and contain the core content for the individual product pages. I just wanted to ask, if I use robots.txt or .htaccess to prevent crawling of the directory that holds the php content files, will there be no issue crawling the html pages which include these files? What I want to achieve is have the html files indexed with the php content included in them, but I don't want visitors landing on the php content pages, nor have these php files indexed as duplicate content. Just clarification needed as to whether it is safe to block spiders from accessing the php folder, without this affecting the html files being indexed with the included content. Is this the best way to do things? Or should I just leave the content php files to be crawled?
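    For reference, a sketch of both pieces (the folder name /includes/ is a placeholder for wherever the include files actually live):
        # robots.txt in the site root - asks well-behaved crawlers not to fetch the include files
        User-agent: *
        Disallow: /includes/

        # .htaccess placed inside the /includes/ folder - blocks direct HTTP access entirely (Apache 2.2 syntax)
        Order allow,deny
        Deny from all
    Neither affects the HTML pages themselves: PHP's include() reads the files from the filesystem, not over HTTP, so the pages are still served and indexed with the included content in place, while the raw include files stay out of the index and can't be landed on directly.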

    Read the article

  • error with slapd while installing any new software

    - by ali haider
    I am trying to install Wireshark on my Ubuntu box (this issue is not specific to Wireshark) and I keep getting the following error for slapd:
        Setting up slapd (2.4.23-6ubuntu6.1) ...
          Creating initial configuration...
        mkdir: cannot create directory `/etc/ldap/slapd.d': File exists
        dpkg: error processing slapd (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         slapd
    Besides uninstalling, or trying to update OpenLDAP or slapd, is there any other action that can be taken to resolve this issue? I am trying the install as the root user, and I have tried moving the slapd conf file so far, but without any luck. Any thoughts on troubleshooting/resolving this issue will be quite welcome. Thanks in advance.
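    One thing often worth trying (a sketch; keep a backup, since it replaces the existing slapd configuration directory): move the half-created config aside and let dpkg's post-installation script rebuild it:
        sudo mv /etc/ldap/slapd.d /etc/ldap/slapd.d.bak   # keep a copy of whatever is there now
        sudo dpkg --configure -a                          # re-run the pending post-installation scripts
    If slapd still refuses to configure, sudo dpkg-reconfigure slapd regenerates the initial configuration interactively.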

    Read the article

  • Mac host and Ubuntu guest on virtual box shared folder issue

    - by Thomas Ryan
    I have set up Ubuntu Server on VirtualBox on my Mac. I created a shared folder, which appears to be saved and is visible in the configuration section of VirtualBox for the Ubuntu Server machine. My only issue is that I don't know where this is or how to access it from inside Ubuntu Server. Does it get its own directory, or do I have to create some sort of symlink? If I do need to manually tell it to look on the Mac for the files, how do I reference the Mac machine?
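    The shared folder does not appear on its own; it has to be mounted inside the guest with the vboxsf filesystem, which requires the VirtualBox Guest Additions in the guest. A minimal sketch (the share name and mount point are placeholders):
        # inside the Ubuntu Server guest, after installing the Guest Additions
        sudo mkdir -p /mnt/macshare
        sudo mount -t vboxsf SHARE_NAME /mnt/macshare     # SHARE_NAME = the name given in the VirtualBox shared folder settings

        # or make it permanent with a line like this in /etc/fstab:
        # SHARE_NAME  /mnt/macshare  vboxsf  defaults  0  0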

    Read the article

  • Does the keyword 'blog' in a URL improve SEO?

    - by slow diver
    I have seen a couple of sites which get a high number of hits. They are mostly tutorial sites and blogs that address software issues/errors. I wonder if the keyword "blog" has a very positive effect on SEO? On my own site, I have installed WordPress in the root folder to avoid any "blog" keyword. I also did this to keep URLs shallow (deeper URLs are not good for SEO). But I may want to think about it again. The sites I am referring to are
        http://blog.sqlauthority.com
        http://veerasundar.com/blog/2011/11/making-xampp-to-serve-any-directory-outside-htdocs/
    I know there are standard (sort of) class names or IDs that identify different kinds of content and make it easier for the search engine to identify them, like "container" and "menu". Would the use of the word "blog" signal that this is about discussing/tutoring something and have a very positive effect on SEO?

    Read the article

  • Getting a TV Capture Card working

    - by Benny Hallett
    I'm new to Linux, and am trying to get my capture card working on 11.04. The only command that I know to run to find out any information is lspci, which tells me that I have:
        02:00.0 Multimedia video controller: Conexant Systems, Inc. CX23885 PCI Video and Audio Decoder (rev 04)
    I've looked at using Me TV, but haven't worked out how to configure it for my card, or what I need to do to get it running. I'm not fussed about what software I use to run the capture card, but I've currently got only Me TV installed.
    Edit: When I run tvtime, I get the following errors:
        videoinput: Cannot open capture device /dev/video0: No such file or directory
        mixer: find error: Success
        mixer: Can't open mixer default, mixer volume and mute unavailable.
        mixer: Can't open device default/Line, mixer volume and mute unavailable.
        Segmentation fault
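    A few diagnostic commands worth running first (a sketch; they only check whether the cx23885 driver has loaded and created any device nodes, which is the usual cause of the /dev/video0 error tvtime reports above):
        lsmod | grep cx23885                     # is the driver module loaded?
        ls /dev/video* /dev/dvb* 2>/dev/null     # did any analogue or DVB device nodes appear?
        dmesg | grep -i cx23885                  # driver and firmware messages from the kernel log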

    Read the article
