Search Results

Search found 42786 results on 1712 pages for 'install from source'.


  • How do I install Red5 using apt-get? Getting sub-process error

    - by Dalen
    This is a copy of a question someone asked on another forum that never got a satisfactory answer. I encountered the same error a few days ago on Ubuntu 13.04 Desktop. It seems like Red5 is installed but it cannot be run for some reason. Can anyone explain what is going on here? Why should dpkg fail? I mean, this is a checked repo, it should work fine. apt-get install red5-server Selecting previously deselected package red5-server. (Reading database ... 53491 files and directories currently installed.) Unpacking red5-server (from .../red5-server_0.9.1-4squeeze1_all.deb) ... Setting up red5-server (0.9.1-4squeeze1) ... Starting Flash streaming server : red5-server failed! invoke-rc.d: initscript red5-server, action "start" failed. dpkg: error processing red5-server (--configure): subprocess installed post-installation script returned error exit status 1 configured to not write apport reports Errors were encountered while processing: red5-server E: Sub-process /usr/bin/dpkg returned an error code (1) The logfile error.log in /usr/share/red5/log was completely empty. The other logs were not, but according to them there were no problems at all.
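
    One way to surface the actual failure is to re-run the failing steps by hand with shell tracing. This is only a sketch and assumes the standard Debian locations for the package's maintainer script and init script:

      # re-run the post-installation script with tracing to see where it dies
      sudo sh -x /var/lib/dpkg/info/red5-server.postinst configure
      # or start the init script directly, again with tracing
      sudo sh -x /etc/init.d/red5-server start

    Whatever command fails inside the trace (usually the daemon start itself) is the real error to chase; the Red5 logs staying empty is consistent with a failure before Red5 ever reaches its own logging.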

    Read the article

  • Logitech USB headphones detected and selected in Debian Squeeze but sound still coming from speakers

    - by mattalexx
    I have a pair of Logitech wireless USB headphones that work with Ubuntu Natty but aren't working in Debian Squeeze. When they are selected as the default audio output, the sound comes out of the speakers instead of the headphones. I have rebooted and tried using a different USB port. My computer is a Thinkpad T510. How can I fix this problem? Here is lsusb: Bus 002 Device 005: ID 046d:0a29 Logitech, Inc. Bus 002 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 005: ID 046d:c52f Logitech, Inc. Wireless Mouse M305 Bus 001 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Here is cat /proc/asound/cards 0 [Intel ]: HDA-Intel - HDA Intel HDA Intel at 0xf2420000 irq 17 1 [Headset ]: USB-Audio - Logitech Wireless Headset Logitech Logitech Wireless Headset at usb-0000:00:1d.0-1.1, full speed 2 [NVidia ]: HDA-Intel - HDA NVidia HDA NVidia at 0xcdefc000 irq 17 Here is the gnome-volume-control GUI: Here's lsmod | grep usb: snd_usb_audio 50670 0 snd_usb_lib 11192 1 snd_usb_audio usbhid 28008 0 hid 50909 1 usbhid snd_rawmidi 12513 2 snd_usb_lib,snd_seq_midi snd_hwdep 4054 2 snd_usb_audio,snd_hda_codec snd_pcm 47226 3 snd_usb_audio,snd_hda_intel,snd_hda_codec usbcore 98969 5 snd_usb_audio,snd_usb_lib,usbhid,ehci_hcd snd 34423 11 snd_usb_audio,snd_rawmidi,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_seq,snd_timer,snd_seq_device nls_base 4541 1 usbcore Here's cat /etc/modprobe.d/alsa-base.conf: # autoloader aliases install sound-slot-0 /sbin/modprobe snd-card-0 install sound-slot-1 /sbin/modprobe snd-card-1 install sound-slot-2 /sbin/modprobe snd-card-2 install sound-slot-3 /sbin/modprobe snd-card-3 install sound-slot-4 /sbin/modprobe snd-card-4 install sound-slot-5 /sbin/modprobe snd-card-5 install sound-slot-6 /sbin/modprobe snd-card-6 install sound-slot-7 /sbin/modprobe snd-card-7 # Cause optional modules to be loaded above generic modules install snd /sbin/modprobe --ignore-install snd && { /sbin/modprobe --quiet snd-ioctl32 ; /sbin/modprobe --quiet snd-seq ; } install snd-rawmidi /sbin/modprobe --ignore-install snd-rawmidi && { /sbin/modprobe --quiet snd-seq-midi ; : ; } install snd-emu10k1 /sbin/modprobe --ignore-install snd-emu10k1 && { /sbin/modprobe --quiet snd-emu10k1-synth ; : ; } # Prevent abnormal drivers from grabbing index 0 options bt87x index=-2 options cx88_alsa index=-2 options snd-atiixp-modem index=-2 options snd-intel8x0m index=-2 options snd-via82xx-modem index=-2 # Keep snd-pcsp from beeing loaded as first soundcard options snd-pcsp index=-2 # Keep snd-usb-audio from beeing loaded as first soundcard options snd-usb-audio index=-2 EDIT In VLC, I reset VLC prefs (Output: Default) and sound still comes out of speakers as expected. Then I change it to "Output: ALSA Audio output" and a Device menu appears. I select the headphones. When I then save the prefs, the audio switch to the headphones! But here's what's weird: I go back to prefs, change it to "Output: Default" and the headphones keep working. Maybe the ALSA option is actually what is being chosen as the "Default" option, but the Device menu (whose selection is still being used) is still set to the headphones. Anyway, now I need to figure out how to make it work as the default for the whole system.
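
    If the goal is to make the headset the default for the whole system at the ALSA level, one commonly suggested sketch is to point ALSA's default devices at the headset's card index from the /proc/asound/cards output above (card 1 here; the index can change between boots):

      # ~/.asoundrc (or /etc/asound.conf for all users)
      defaults.pcm.card 1
      defaults.ctl.card 1

    This is an assumption-laden workaround rather than the PulseAudio way of doing it; if PulseAudio is managing the devices, the output selection in pavucontrol is the equivalent knob.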

    Read the article

  • Sonatype submits the open source Tycho project to the Eclipse community; version 1.0 expected in Q3

    Hello, Sonatype has finalized the proposal of the Tycho project as an Eclipse project. The goal of Tycho is to build on the Maven build tool to produce Eclipse plugins, features, update sites, RCP applications, and OSGi bundles. Concretely, Tycho is a set of Maven plugins. The initial list of committers would be 100% Sonatype: Igor Fedorenko (project lead), Benjamin Bentmann, Marvin Froeder, Jason van Zyl. Tycho positions itself in the same space as the Eclipse solutions Buckminster, B3, PDE Build, and Athena. Have some of you already looked into Tycho? What do you think...

    Read the article

  • IBus client for GNU Emacs: Installed, but how do I start it?

    - by fred.bear
    Having recently moved to Linux/Ubuntu, I'm looking for a good editor, and GNU Emacs seems to fit the bill. One thing I want from a text editor is the ability to handle Unicode Input Method Editors in a "normal way", across the board. For Ubuntu, the "normal way" is via IBus. However, emacs does not support IBus "off the shelf". I found a launchpad project: IBus client for GNU Emacs: ibus-el. I've installed ibus-el and set it up as per the Customize section of this emacswiki IBusMode page. I included the suggested "toggle" keybinding: ;; Use s-SPC to toggle input status It seems to have installed okay, but I have no idea how to invoke IBus and switch IMEs. s-SPC doesn't fire up the IBus language panel... I'm stuck :( ...so close, yet so far.... Here are the startup *messages* Loading 00debian-vars... No /etc/mailname. Reverting to default... Loading 00debian-vars...done Loading /etc/emacs/site-start.d/50autoconf.el (source)...done Loading /etc/emacs/site-start.d/50dictionaries-common.el (source)... Loading debian-ispell... Loading /var/cache/dictionaries-common/emacsen-ispell-default.el (source)...done Loading debian-ispell...done Loading /var/cache/dictionaries-common/emacsen-ispell-dicts.el (source)...done Loading /etc/emacs/site-start.d/50dictionaries-common.el (source)...done Loading /etc/emacs/site-start.d/50festival.el (source)...done Loading /etc/emacs/site-start.d/50gtk-doc-tools.el (source)...done Loading /etc/emacs/site-start.d/50ibus-el.el (source)...done IBus: Xlib.protocol.request.QueryExtension IBus: Agent successfully started for display ":0.0"
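
    ibus-el is only a bridge to the IBus daemon, so a quick thing to rule out before digging into the Emacs side is whether the daemon is actually running and has at least one input method enabled; a sketch from a terminal (the flags are the common ones):

      ibus-daemon --daemonize --xim     # start the daemon if it is not already running
      ibus-setup                        # enable the input methods you want s-SPC to toggle between

    If the language panel works in other applications but not in Emacs, the ibus-el keybinding itself (the s-SPC toggle from the wiki snippet) is the next thing to check.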

    Read the article

  • Error while compiling/installing PHP with FPM for RPM on Centos 5.4 x64

    - by Raymond
    Hi, I'm trying to make an RPM with PHP 5.3.1 and PHP-FPM 0.6 for CentOS 5.4. So far it goes quite well, but when rpmbuild gets to the installation phase it fails with the following error: Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.63379 + umask 022 + cd /usr/src/redhat/BUILD + cd /usr/src/redhat/BUILD/php-5.3.1/fpm-build/ + make install Installing PHP SAPI module: fpm Installing PHP CLI binary: /usr/bin/ cp: cannot create regular file `/usr/bin/#INST@12668#': Permission denied make: *** [install-cli] Error 1 error: Bad exit status from /var/tmp/rpm-tmp.63379 (%install) RPM build errors: Bad exit status from /var/tmp/rpm-tmp.63379 (%install) I am running rpmbuild as a normal user, so it's understandable that it will fail to install anything into /usr/bin, but it shouldn't try to install anything outside the buildroot in the first place. I have however specified the BuildRoot in the header of the spec file and I can see it is passed correctly to the make install command. Does anyone have some idea of what is going wrong here? Thanks a lot!
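
    For what it's worth, PHP's generated Makefile honours INSTALL_ROOT rather than the more common DESTDIR, so the usual fix in the spec is to pass the buildroot explicitly; a sketch only (the fpm-build directory name is taken from the log above):

      %install
      rm -rf %{buildroot}
      make -C fpm-build install INSTALL_ROOT=%{buildroot}

    Without that variable, make install writes straight to /usr/bin, which is exactly the Permission denied failure shown in the log.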

    Read the article

  • How does copyright apply to source code header files?

    - by Jim McKeeth
    I seem to recall hearing that header files are not considered copyrightable, since they can only be written one way (like a list of ingredients or facts). So a header file for a specific DLL will always look the same when written in a given programming language. Unfortunately I can't find any resources to back this up. Suppose a vendor provides an SDK with headers in one programming language, and those headers are then translated into another programming language by a third party. Does the third party need permission from the vendor to provide the header translation? Who owns the copyright on the translation? Is it still a derivative work owned by the vendor, or is there no copyright at all, like a list of ingredients? Does this vary from jurisdiction to jurisdiction?

    Read the article

  • Include Binary Files in DEB package

    - by user22611
    I need to build a DEB package from mainly Node.js JavaScript files, but it should include some binary files as well. They are listed inside debian/source/include-binaries; otherwise I get the error message dpkg-source: error: unrepresentable changes to source. The command in question is: bzr builddeb -- -us -uc. After adding the include-binaries file and running bzr builddeb -- -us -uc again, I now get a different error: dpkg-source: error: aborting due to unexpected upstream changes, see /tmp/mailadmin_0.0-1.diff.n6m5_6. I have no idea how to get rid of this. In the next line of output it tells me dpkg-source: info: you can integrate the local changes with dpkg-source --commit, but if I run this command in the build area of my package, it gives me the unrepresentable changes to source error message again, even though debian/source/include-binaries is present in the build area as well. I can't see the way out of this... I tried deleting all files that are produced by the build process, still no success. Further details: the target directory is /opt/mailadmin. Since this directory is unusual, I listed it in the file debian/mailadmin.install (which contains one line: opt/mailadmin opt/). The bzr builddeb process uses this file as expected.
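
    A rough sequence that has worked for this kind of 3.0 (quilt) complaint (the patch name and the binary path below are placeholders, not values from the question) is to record the binaries and commit the remaining upstream changes from the original source tree rather than from the build area:

      # run from the top of the unpacked source tree
      echo path/to/your/binary-file >> debian/source/include-binaries   # one entry per binary
      dpkg-source --commit . local-binaries.patch
      bzr builddeb -- -us -uc

    dpkg-source --commit wraps the leftover diff into a quilt patch under debian/patches, which is usually enough to get past the "unexpected upstream changes" abort.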

    Read the article

  • Open-source LAMP package for simple accounting and receiving?

    - by user26664
    Hello. I am involved with a computer-based charity where we take donations of old equipment, often recycle it, mostly rehabilitate it and make it available through grants and 'adoptions', and sell some items. What we're looking for is a LAMP package that can handle our records of equipment donations, print receipts for donors, and also track our thrift store sales with receipts. Donors will want receipts for their donations for tax deduction purposes. We'll want to print reports of incoming items from time to time, say monthly and yearly. For the thrift store, we'll also need receipts, reports of cash (especially for reconciling cash drops), and reports of items sold from time to time. I'm thinking this might be a single package, but it might be two. We don't want to track our shop inventory with either of these programs -- that's another program/project. We just need to know what was donated and what was sold. It must be open source, and ideally we'd like it to run on LAMP -- Linux, Apache, MySQL, PHP -- but we will consider other open-source platforms.

    Read the article

  • Upgrade to Win8 x64 on a blank SSD using the downloaded $40 upgrade and only a retail 32-bit XP install CD

    - by Ron
    Here is my situation. Purchase the downloadable $40 UPGRADE version of Windows 8. Install this upgrade to a new/blank SSD drive WITHOUT a prior installation of the retail version of XP/Service Pack 3 (preferred). I have the retail-purchased 32-bit XP installation media and I also have a slipstreamed disc that contains Service Pack 2. This 32-bit XP license was installed on a desktop PC that I have NOT used for years (it's broken). Questions: Can I upgrade using the $40 download upgrade version from retail 32-bit XP to 64-bit Windows 8 directly on the new SSD, without first installing 32-bit XP on the new SSD? If 32-bit XP needs to be installed to perform the upgrade to 64-bit Win 8, is XP Service Pack 3 still available? Likewise, if the boxed retail version of 32-bit XP is required to be pre-installed on the new SSD before attempting the downloaded $40 upgrade to 64-bit Windows, can a clean install be performed, or is an undesired in-place upgrade performed? From what I have read this is way too complicated. Ideally, I should be able to install the $40 64-bit upgrade version of Windows directly on the new/blank SSD, then during the license verification process enter both the Win8 64-bit upgrade key and the retail 32-bit XP key (over the internet or by phone call). Thanks very much in advance for any insight!! Regards, Ron

    Read the article

  • Developing Schema Compare for Oracle (Part 2): Dependencies

    - by Simon Cooper
    In developing Schema Compare for Oracle, one of the issues we came across was the size of the databases. As detailed in my last blog post, we had to allow schema pre-filtering due to the number of objects in a standard Oracle database. Unfortunately, this leads to some quite tricky situations regarding object dependencies. This post explains how we deal with these dependencies. 1. Cross-schema dependencies Say, in the following database, you're populating SchemaA, and synchronizing SchemaA.Table1: SOURCE   TARGET CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(Col1));   CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100) REFERENCES SchemaB.Table1(Col1)); CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);   CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100) PRIMARY KEY); We need to do a rebuild of SchemaA.Table1 to change Col1 from a VARCHAR2(100) to a NUMBER. This consists of: Creating a table with the new schema Inserting data from the old table to the new table, with appropriate conversion functions (in this case, TO_NUMBER) Dropping the old table Rename new table to same name as old table Unfortunately, in this situation, the rebuild will fail at step 1, as we're trying to create a NUMBER column with a foreign key reference to a VARCHAR2(100) column. As we're only populating SchemaA, the naive implementation of the object population prefiltering (sticking a WHERE owner = 'SCHEMAA' on all the data dictionary queries) will generate an incorrect sync script. What we actually have to do is: Drop foreign key constraint on SchemaA.Table1 Rebuild SchemaB.Table1 Rebuild SchemaA.Table1, adding the foreign key constraint to the new table This means that in order to generate a correct synchronization script for SchemaA.Table1 we have to know what SchemaB.Table1 is, and that it also needs to be rebuilt to successfully rebuild SchemaA.Table1. SchemaB isn't the schema that the user wants to synchronize, but we still have to load the table and column information for SchemaB.Table1 the same way as any table in SchemaA. Fortunately, Oracle provides (mostly) complete dependency information in the dictionary views. Before we actually read the information on all the tables and columns in the database, we can get dependency information on all the objects that are either pointed at by objects in the schemas we’re populating, or point to objects in the schemas we’re populating (think about what would happen if SchemaB was being explicitly populated instead), with a suitable query on all_constraints (for foreign key relationships) and all_dependencies (for most other types of dependencies eg a function using another function). The extra objects found can then be included in the actual object population, and the sync wizard then has enough information to figure out the right thing to do when we get to actually synchronize the objects. Unfortunately, this isn’t enough. 2. Dependency chains The solution above will only get the immediate dependencies of objects in populated schemas. What if there’s a chain of dependencies? A.tbl1 -> B.tbl1 -> C.tbl1 -> D.tbl1 If we’re only populating SchemaA, the implementation above will only include B.tbl1 in the dependent objects list, whereas we might need to know about C.tbl1 and D.tbl1 as well, in order to ensure a modification on A.tbl1 can succeed. What we actually need is a graph traversal on the dependency graph that all_dependencies represents. 
Fortunately, we don’t have to read all the database dependency information from the server and run the graph traversal on the client computer, as Oracle provides a method of doing this in SQL – CONNECT BY. So, we can put all the dependencies we want to include together in big bag with UNION ALL, then run a SELECT ... CONNECT BY on it, starting with objects in the schema we’re populating. We should end up with all the objects that might be affected by modifications in the initial schema we’re populating. Good solution? Well, no. For one thing, it’s sloooooow. all_dependencies, on my test databases, has got over 110,000 rows in it, and the entire query, for which Oracle was creating a temporary table to hold the big bag of graph edges, was often taking upwards of two minutes. This is too long, and would only get worse for large databases. But it had some more fundamental problems than just performance. 3. Comparison dependencies Consider the following schema: SOURCE   TARGET CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(col1));   CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100)); CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);   CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100)); What will happen if we used the dependency algorithm above on the source & target database? Well, SchemaA.Table1 has a foreign key reference to SchemaB.Table1, so that will be included in the source database population. On the target, SchemaA.Table1 has no such reference. Therefore SchemaB.Table1 will not be included in the target database population. In the resulting comparison of the two objects models, what you will end up with is: SOURCE  TARGET SchemaA.Table1 -> SchemaA.Table1 SchemaB.Table1 -> (no object exists) When this comparison is synchronized, we will see that SchemaB.Table1 does not exist, so we will try the following sequence of actions: Create SchemaB.Table1 Rebuild SchemaA.Table1, with foreign key to SchemaB.Table1 Oops. Because the dependencies are only followed within a single database, we’ve tried to create an object that already exists. To fix this we can include any objects found as dependencies in the source or target databases in the object population of both databases. SchemaB.Table1 will then be included in the target database population, and we won’t try and create objects that already exist. All good? Well, consider the following schema (again, only explicitly populating SchemaA, and synchronizing SchemaA.Table1): SOURCE   TARGET CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(col1));   CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100)); CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);   CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100) PRIMARY KEY); CREATE TABLE SchemaC.Table1 ( Col1 NUMBER);   CREATE TABLE SchemaC.Table1 ( Col1 VARCHAR2(100) REFERENCES SchemaB.Table1); Although we’re now including SchemaB.Table1 on both sides of the comparison, there’s a third table (SchemaC.Table1) that we don’t know about that will cause the rebuild of SchemaB.Table1 to fail if we try and synchronize SchemaA.Table1. That’s because we’re only running the dependency query on the schemas we’re explicitly populating; to solve this issue, we would have to run the dependency query again, but this time starting the graph traversal from the objects found in the other database. 
Furthermore, this dependency chain could be arbitrarily extended. This leads us to the following algorithm for finding all the dependencies of a comparison: (1) find the initial dependencies of the schemas the user has selected to compare on the source and target; (2) include these objects in both the source and target object populations; (3) run the dependency query on the source, starting with the objects found as dependents on the target, and vice versa; (4) repeat 2 & 3 until no more objects are found. For the schema above, this will result in the following sequence of actions: Find initial dependencies (SchemaA.Table1 -> SchemaB.Table1 found on source; no objects found on target). Include objects in both source and target (SchemaB.Table1 included in source and target). Run dependency query, starting with found objects (no objects to start with on source; SchemaB.Table1 -> SchemaC.Table1 found on target). Include objects in both source and target (SchemaC.Table1 included in source and target). Run dependency query on found objects (no objects found in source; no objects to start with in target). Stop. This will ensure that we include all the necessary objects to make any synchronization work. However, there is still the issue of query performance; the CONNECT BY on the entire database dependency graph is still too slow. After much sitting down and drawing complicated diagrams, we decided to move the graph traversal algorithm from the server onto the client (which turned out to run much faster on the client than on the server); and to ensure we don't read the entire dependency graph onto the client we also pull the graph across in bits – we start off with dependency edges involving schemas selected for explicit population, and whenever the graph traversal comes across a dependency reference to a schema we don't yet know about, a thunk is hit that pulls in the dependency information for that schema from the database. We continue passing more dependent objects back and forth between the source and target until no more dependency references are found. This gives us the list of all the extra objects to populate in the source and target, and object population can then proceed. 4. Object blacklists and fast dependencies When we tested this solution, we were puzzled that in some of our databases most of the system schemas (WMSYS, ORDSYS, EXFSYS, XDB, etc.) were being pulled in, and this was increasing the database registration and comparison time quite significantly. After debugging, we discovered that the culprits were database tables that used one of the Oracle PL/SQL types (e.g. the SDO_GEOMETRY spatial type). These were creating a dependency chain from the database tables we were populating to the system schemas, and hence pulling in most of the system objects in that schema. To solve this we introduced blacklists of objects we wouldn't follow any dependency chain through. As well as the Oracle-supplied PL/SQL types (MDSYS.SDO_GEOMETRY, ORDSYS.SI_COLOR, among others) we also decided to blacklist the entire PUBLIC and SYS schemas, as any references to those would likely lead to a blow-up in the dependency graph that would massively increase the database registration time, and could result in the client running out of memory. Even with these improvements, each dependency query was taking upwards of a minute. We discovered from Oracle execution plans that there were some columns, with dependency information we required, that were querying system tables with no indexes on them! 
To cut a long story short, running the following query: SELECT * FROM all_tab_cols WHERE data_type_owner = 'XDB'; results in a full table scan of the SYS.COL$ system table! This single clause was responsible for over half the execution time of the dependency query. Hence, the 'Ignore slow dependencies' option was born – not querying this and a couple of similar clauses to drastically speed up the dependency query execution time, at the expense of producing incorrect sync scripts in rare edge cases. Needless to say, along with the sync script action ordering, the dependency code in the database registration is one of the most complicated and most rewritten parts of the Schema Compare for Oracle engine. The beta of Schema Compare for Oracle is out now; if you find a bug in it, please do tell us so we can get it fixed!
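
    For readers who want to experiment with the traversal idea described above, here is a much simplified sketch of the kind of CONNECT BY query involved; the starting schema is a placeholder, the edge direction is only one of the two the article needs, and the real implementation also unions in the foreign-key edges from all_constraints:

      SELECT LEVEL, owner, name, referenced_owner, referenced_name
      FROM   all_dependencies
      START WITH owner = 'SCHEMAA'
      CONNECT BY NOCYCLE owner = PRIOR referenced_owner
                     AND name  = PRIOR referenced_name;

    Running something like this against a large data dictionary also makes the performance problem described above easy to reproduce.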

    Read the article

  • apt-get doesn't download files from NFS location

    - by Pravesh
    I switched to Unix three months ago and am trying to understand the install process, in particular apt-get. I am able to successfully download and install packages when I configure my repository with an http location in the /etc/apt/sources.list file, e.g. deb http://web.myspqce.com/u/eng/rose/debian-mirror-squeeze-amd64/mirror/ftp.us.debian.org/debian/ squeeze main contrib non-free With this, apt-get install will both download the package (to /var/cache/apt/archive) and install it. When I change the source location to file (an NFS mount point) instead of http, the package gets installed but NOT downloaded into /var/cache/apt/archive: deb file:/deb_repository/debian-mirror-squeeze-amd64/mirror/ftp.us.debian.org/debian/ squeeze main contrib non-free Please let me know if there is any configuration or setting I have to make so that apt-get both downloads and installs the package when I use (NFS) file:/ instead of http:/ in sources.list. To achieve this I can use apt-get --download-only and then apt-get install, doing the download and the install in two separate calls, but I want to know why the package is not downloaded by apt-get install, only installed, when file:/ is used in sources.list.
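
    If the aim is simply to have the .debs land in the cache as well, one option worth trying is APT's copy method, which the sources.list documentation describes as identical to file except that the packages are copied into the cache directory; a sketch using the path from the question:

      deb copy:/deb_repository/debian-mirror-squeeze-amd64/mirror/ftp.us.debian.org/debian/ squeeze main contrib non-free

    With file: APT deliberately skips the copy because the packages are already reachable on local disk, which is why nothing shows up under /var/cache/apt/archives.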

    Read the article

  • How to start/stop iptables in Ubuntu 12.04?

    - by imwrng
    I am using Ubuntu 12.04. While learning some new things about iptables I can't get through this (see the image). When I try to start the service, it says: root@badfox:~# iptables -L -n -v Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination root@badfox:~# service iptables stop iptables: unrecognized service root@badfox:~# service iptables start iptables: unrecognized service Source: http://www.cyberciti.biz/tips/linux-iptables-examples.html Why am I getting this? EDIT: So my firewall is already started, but why am I not getting the output shown in the first example at the source link above? Here is my output: root@badfox:~# sudo start ufw start: Job is already running: ufw root@badfox:~# iptables -L -n -v Chain INPUT (policy ACCEPT 4882 packets, 2486K bytes) pkts bytes target prot opt in out source destination Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 5500 packets, 873K bytes) pkts bytes target prot opt in out source destination root@badfox:~#
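
    Ubuntu does not ship a System V "iptables" service the way RHEL/CentOS does, which is why service iptables start|stop reports "unrecognized service"; the rules are always present in the kernel, just empty by default. A sketch of the usual alternatives:

      sudo ufw enable              # Ubuntu's own firewall front end (the Upstart job shown above)
      sudo ufw status verbose
      # or manage raw rules yourself and persist them manually:
      sudo iptables-save  > /etc/iptables.rules
      sudo iptables-restore < /etc/iptables.rules

    The cyberciti examples assume a Red Hat-style init script, so their service commands simply do not apply on 12.04.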

    Read the article

  • local install of wp site brought down from host - home page is ok but other pages redirect to wamp config page

    - by jeff
    This is a local install of a WP site brought down from the host: the home page is OK, but other pages redirect to the WAMP config page. I copied all the files from the host into the www dir under local WAMP. I got the database from the host, loaded it into a new local DB, and used this tool to adjust site_on_web.com to "localhost/site_on_local". Now the home page works great and I can log in to the admin page, but when I click on the Reservations page (and other pages of the site) the site just goes to the WAMP server config page, even though the URL shows correctly as localhost/site_on_local/reservations. My htaccess file is this: # BEGIN WordPress <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> # END WordPress and the rewrite module is checked in the WAMP Apache modules setting. When I uncheck the rewrite module or clear out the whole htaccess file, the pages just go to Not Found: The requested URL /ritas041214/about-ritas/ was not found on this server. Please help, as I am now unsure about my process for moving the site between local and live, and without this I am lost...
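
    Since the site now lives in a subdirectory rather than the web root, a hedged adjustment to try is pointing RewriteBase and the fallback rule at that subdirectory (the directory name below comes from the question) and making sure the Apache config allows .htaccess overrides (AllowOverride All) for the www directory:

      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /site_on_local/
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /site_on_local/index.php [L]
      </IfModule>
      # END WordPress

    The WordPress Address and Site Address settings (the siteurl and home rows in wp_options) also need to carry the /site_on_local path, otherwise permalinks keep pointing at the server root and land on the WAMP index page.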

    Read the article

  • C: What is a good source to teach standard/basic code conventions to someone newly learning the language?

    - by shan23
    I'm tutoring someone who can be described as a rank newcomer to C. Understandably, she does not know much about the coding conventions generally practiced, and hence all her programs tend to use single-letter variables, mismatched spacing/indentation and the like, making it very difficult to read/debug her endeavors. My question is: is there a link/set of guidelines and examples which she can use to adopt basic code conventions? It should not be so arcane as to scare her off, yet inclusive enough to cover the basics (so that no one would wince looking at the code). Any suggestions?

    Read the article

  • How do I install the EW-7318Ug wireless drivers?

    - by user69731
    I'm on Ubuntu 12.04 and I need to install & configure Internet connection. I want to use montor mode in Wireshark. After Ubuntu installation my wireless card was recognized but it doesn't connect to the Internet. Internal wireless card works well. What should I do? I'm new to Linux. PC:~$ ifconfig -a eth0 Link encap:Ethernet HWaddr 64:31:50:0f:d4:70 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:45 Base address:0x8000 eth1 Link encap:Ethernet HWaddr e0:2a:82:aa:2d:3e inet addr:192.168.0.101 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::e22a:82ff:feaa:2d3e/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:20978 errors:0 dropped:0 overruns:0 frame:364651 TX packets:20949 errors:171 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:19299119 (19.2 MB) TX bytes:3024858 (3.0 MB) Interrupt:19 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:4788 errors:0 dropped:0 overruns:0 frame:0 TX packets:4788 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:387017 (387.0 KB) TX bytes:387017 (387.0 KB) wlan1 Link encap:Ethernet HWaddr 00:1f:1f:44:c1:a4 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:10508 errors:0 dropped:0 overruns:0 frame:0 TX packets:6143 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:14831319 (14.8 MB) TX bytes:644606 (644.6 KB) PC:~$ iwconfig lo no wireless extensions. wlan1 IEEE 802.11bg ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=20 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:on eth1 IEEE 802.11 Access Point: Not-Associated Link Quality:5 Signal level:225 Noise level:162 Rx invalid nwid:0 invalid crypt:0 invalid misc:0 eth0 no wireless extensions.
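
    Assuming the Edimax adapter is the interface shown as wlan1 above and its driver supports it, here is a sketch of getting a monitor-mode interface for Wireshark (the tools come from the aircrack-ng and iw packages; interface names are examples):

      sudo apt-get install aircrack-ng
      sudo airmon-ng start wlan1                      # typically creates mon0
      # or, with iw directly:
      sudo iw dev wlan1 interface add mon0 type monitor
      sudo ip link set mon0 up

    Wireshark can then capture on mon0; getting the card to associate for normal internet use is a separate driver/NetworkManager question.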

    Read the article

  • Ubuntu won't suspend anymore, but it did upon install.

    - by Bruce Connor
    I fresh installed Ubuntu 10.10 back when it came out, and my laptop was suspending fine. All of a sudden, I can't get my laptop to suspend anymore. It's an HP Pavilion dv2-1110, but I don't think it's a hardware issue, here's why: It suspended fine upon first install. I haven't installed any new kernels since then, but I have installed tons of packages, so it's probably a package. The suspend and hibernate options disappeared from the shutdown menu. If I press my keyboard's suspend button (or if I close the lid) I get the following message: If I try the command pmi action suspend, I get the error message: Error org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.Hal was not provided by any .service files. If I try the command echo -n mem > sudo /sys/power/state I get absolutely no output and no visible effect. What might be causing this behavior? I thought a list of installed packages might be useful, but it's huge and I don't know how to post it here in collapse/expand mode or something. EDIT:Just in case someone asks, none of the installed packages are kdm or anything like that (which would justify the lack of options in gnome's shutdown menu).
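
    The pmi tool talks to HAL, which was dropped by 10.10, so its D-Bus error is expected rather than the root cause; a sketch of the newer ways to trigger a suspend for testing (pm-suspend comes from the pm-utils package, and the D-Bus call is roughly what the desktop itself uses):

      sudo pm-suspend
      # or ask UPower over D-Bus:
      dbus-send --system --print-reply \
        --dest=org.freedesktop.UPower \
        /org/freedesktop/UPower org.freedesktop.UPower.Suspend

    If pm-suspend works but the menu entries are still missing, the problem is more likely the session's policy layer (polkit/UPower reporting suspend as unavailable) than the kernel.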

    Read the article

  • How do I install mod_dav_svn module on an Apache / MAMP server?

    - by fettereddingoskidney
    How do I install additional modules into my server configuration? Currently all of the other modules are installed in /Applications/MAMP/Library/modules...and I see that they are mod_*.so source files, but I cannot seem to get mine to end up here... :? I am trying to set up an SVN repository and use my Apache (MAMP) server to serve the repository. I am using the subversion installation that came (pre-installed?) on Mac OS X 10.5. The repository is working, but I cannot access it remotely through my MAMP server using a client program (Dreamweaver CS5). When I try, I get an error from Dreamweaver, saying: Cannot connect to host xxx: Connection refused. This, I believe, is because I have not properly configured my Apache server to serve the svn repository. So, I added the following lines to my httpd.conf file: <Location /subversion> DAV svn SVNPath /Applications/MAMP/htdocs/svn/ AuthType Basic AuthName "Subversion repository" AuthUserFile /applications/mamp/htdocs/.htpasswd Require ServerAdmin </Location> Restarted the server with the command $ /Applications/MAMP/Library/bin/apachectl -k restart I used this path because otherwise the default apachectl path is set to /usr/sbin/apachectl, which is the location of the pre-installed command on Mac OS X, since the OS comes packaged with a built-in Apache server. And I get the error: Syntax error on line 1153 of /Applications/MAMP/conf/apache/httpd.conf: Unknown DAV provider: svn I checked the upper portion of httpd.conf and see that dav_module (mod_dav.so) is loaded and is in fact in my the modules directory of my server. However, mod_dav_svn is not installed in that directory nor is it in the LoadModule portion of httpd.conf. So I need to install it, right? I have tried installing modules into my MAMP server before but was never successful...because I don't know how to do it. Can someone please walk me through how to install that module? Thanks for your time!
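
    MAMP does not bundle mod_dav_svn, so it generally has to be compiled against MAMP's own Apache rather than the system one; a very rough sketch, assuming a default MAMP layout and a Subversion source tarball matching the installed svn version (the paths and version number are guesses to adjust):

      cd subversion-1.x.x
      ./configure --with-apxs=/Applications/MAMP/Library/bin/apxs
      make && sudo make install
      # then load the modules in /Applications/MAMP/conf/apache/httpd.conf:
      #   LoadModule dav_svn_module   modules/mod_dav_svn.so
      #   LoadModule authz_svn_module modules/mod_authz_svn.so

    With the module loading before the <Location /subversion> block, the "Unknown DAV provider: svn" error should go away; the Require line in that block will still need to become something like Require valid-user for authentication to behave.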

    Read the article

  • Launch of the help forum dedicated to Play! framework, the open source Java framework for devel…

    Hello, the aim of this forum is to allow the French-speaking community to exchange around the Play! framework, a web framework which, as a reminder, was started by the Frenchman Guillaume Bort, helped by other French speakers such as Nicolas Leroux, both of whom have already given presentations on Play! at JUGs. Don't forget the resources at your disposal: the official English-language site for the framework; the Google Group for your questions in English; the French translation of the tutorial...

    Read the article

  • How to determine the source of a request in a distributed service system?

    - by Kabumbus
    Map/Reduce is a great concept for sorting large quantities of data at once. What do you do if you have small pieces of data and you need to reduce them all the time? A simple example: choosing a service for a request. Imagine we have 10 services. Each provides the services host with sets of request headers and POST/GET arguments. Each service declares it has 30 unique keys - 10 per set. service A: name id ... Now imagine we have a distributed services host. We have 200 machines with 10 services on each. Each service has 30 unique keys in its sets, but now, to find which service to map the incoming request to, we make our services post unique values that map to those sets. We can have 10 000 or more such value sets on each machine per service. service A machine 1 name = Sam id = 13245 ... service A machine 1 name = Ben id = 33232 ... ... service A machine 100 name = Ron id = 777888 ... So we get 200 * 10 * 30 * 30 * 10 000 == 18 000 000 000, and we get 500 requests per second on our gateway, each containing 45 items, 15 of which are just noise. Our task is to find a service for the request (or at least the machine it is running on). On all machines across the cluster the same services have the same rules. We can first select which service the request came for via a rules filter of 10 * 30, and we will have 200 * 30 * 10 000 == 60 000 000. So... 60 million is definitely a problem... My idea is to map the 30 * 10 000 values onto some artificial neural network like a Perceptron that outputs 1 if the 30 words (some hashes of the words) from the request are correct, and 0 otherwise. Then I'd send each such Perceptron, for each service from each machine, to the gateway, so I would have a Perceptron <-> machine map for each service. Can anyone tell me if my Perceptron idea is at least "sane"? Or do people normally do this some other way? Are there better ANNs for such purposes?

    Read the article

  • Where to install boot loader on a Zenbook Prime?

    - by Christians
    I cannot figure out where to install the boot loader on my Zenbook UX31A Prime. I have installed Ubuntu many times on normal hard drives, but this is the first SSD and I am struggling. Installed Ubuntu 12.04 64-bit selecting "UEFI: general" boot entry. Installation type: Something Else Created partition /sda5 mount as /, /sda6 mount as /home, /sda7 mount as swap Selected /dev/sda for boot loader installation. Other options are /dev/sda, /dev/sda1/dev/sda3 Windows 7 (loader) ... Grub comes up with 6 entries Ubuntu - this runs great Linux 3.2.0-29-generic recovery mode: mode hangs with "fb: conflicting fb hw usae interdrnfb vs EFI VGA - removing generic adapter" memtest86:erro: unknown command `linux 16' memtest86 serial: unknown command `linux 16' Windows 7 (loader) (on /dev/sda3): invalid EFI file path Windows Recovery Environment (on /dev/sda8): unknown command drivemap, invalid EFI file path. My workaround for booting Windows 7 is hitting ESC during boot, windows boot manager comes up and * for booting into Windows 7 I select "WIndows Boot Manager (PO: SanDisk ....". * for booting into Ubuntu I select ubuntu (P0: SanDisk...) How can I boot into Windows from Grub?

    Read the article

  • How to verify that all files are intact prior to install?

    - by Kalle H. Väravas
    I've been working on my CMS (on the PHP platform) for a long time now. The main program is done and I'm currently developing the installer part. Installation itself will be fairly simple: (1) upload all files; (2) verify that the "content/" dir has correct permissions; (3) check that ALL files are intact and not modified [this is the subject of this question]; (4) insert the config data and first settings; (5) run the install (generate all DB tables, insert sample data, etc.). Now the question mark is at step 3. How do I verify ALL files? Verification should compare all files in the CMS root directories against a list fetched from a remote location. The list should contain the filename, file size and file type. This way the user can check that there are no unnecessary or corrupted files that could indicate a breach in the software. I have seen some software installers do this, but I cannot find any right now, and therefore I'm clueless about the best method for this. Of course there is always the simple array trick, but surely there must be a better and faster method?!
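
    One common sketch of step 3 is to ship (or fetch from a trusted URL) a checksum manifest generated at release time and compare against it; a cryptographic hash catches modified files more reliably than size/type alone. The paths below are placeholders:

      # generated once per release, from the CMS root
      find . -type f ! -path './content/*' -print0 | xargs -0 sha256sum > manifest.sha256
      # the installer (or an admin shell) can then verify with
      sha256sum -c manifest.sha256

    The same comparison is easy to reproduce in PHP by hashing each file and looping over the manifest instead of calling the command-line tool.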

    Read the article

  • How can I install Ubuntu on my NTFS HDD without formatting?

    - by Ridvan Coban
    My HDD is just one NTFS partition (500 GB), and 430 GB is used by my photos/movies/music etc., which I never want to lose. I actually installed Ubuntu on a USB flash drive (I'm using it right now), but it is too slow that way. My problem is: my computer is damaged (maybe the chipset, but I'm not sure) and none of the Windows versions (XP, Vista, 7) work on my PC. I get a blue screen error as soon as the Windows startup logo shows, but Ubuntu works flawlessly. That means I cannot use Wubi. I wanted to shrink my HDD without losing data (which can be done in Windows) but found nothing about that on the Ubuntu forums. Is this possible? Or can I install Ubuntu on my NTFS filesystem? Note: I don't have a chance to back up 400 GB of data. Sorry if my question is written a bit confusingly; I hope you get the point and someone has an idea ;)
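
    Installing onto NTFS itself isn't supported, but shrinking the NTFS partition from the live USB usually is, as long as there is enough free space and the filesystem is clean. A sketch (the device name and target size are examples; run the checks and the dry run first, because resizing with no backup is inherently risky):

      sudo ntfsresize --info /dev/sda1                     # how far can it shrink?
      sudo ntfsresize --no-action --size 450G /dev/sda1    # dry run only
      # the actual resize plus the partition-table change is easiest done from GParted:
      sudo gparted

    GParted (install it with sudo apt-get install gparted if it isn't already in the live session) wraps the same ntfsresize step and adjusts the partition table in one operation, after which the installer can use the freed space.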

    Read the article

  • Installing Visual Studio 2010 Service Pack 1

    - by Martin Hinshelwood
    As has become customary when the product team releases a new patch, SP or version, I like to document the install. This post seems almost redundant as I had no problems, but I think that is as valuable to others thinking of installing the Service Pack as all the problems we sometimes get. As per Brian's post I am installing Visual Studio Team Foundation Server Service Pack 1 first, and indeed, as this is a single-server local deployment, I need to install both; if I only install one it will leave the other product broken. Figure: Hopefully this will be more uneventful. It takes a little while for your system to be checked to see what components need updating. On my main computer this was pretty quick, but on the laptop it took some time. Figure: There are a lot of components to update. With this update also comes an update to .NET as well as many other components. Figure: I downloaded the full 1.5 GB, but you could do a web install. How long the download takes depends on how good your internet connection is, but as I am now in the US I decided not to trust the internet connection speeds. It took around 30-40 minutes to download the full thing, which is a little slow. Figure: I did not need to download, but that would increase the install time. So on my main computer again this was fast, but on my netbook it took a little while. Figure: The actual install took around 30-40 minutes (2 hours on the netbook). I was pretty impressed with the speed of the install, and as Team Explorer is now out of the box with Visual Studio 2010 I don't get the problem of the SP being installed before Team Explorer and having a disjointed experience. Figure: As I suspected, no problems with the install. Figure: Checking in Visual Studio shows that all the servicing points were successful. This was an easy experience even if the SP was over 1.5 GB to download. Hopefully I will be discovering things that work better for a good while to come, as well as not seeing holes in the product that I had not encountered yet. What were your experiences of installing Visual Studio 2010 Service Pack 1?

    Read the article

  • Unable to install Ubuntu on an AMD 64-bit system with an AMD Radeon HD 6670 graphics card

    - by Tom Wingrove
    I’ve been running a dual boot system (Ubuntu/Windows7) for two years or so with no problems. I recently built an AMD 64 Bit System, re-installed Windows but when I went to load Ubuntu inside Windows, hit a snag. The screen view during installation became small square blocks of colour, which obviously is a graphics drive problem. I tried various live disks both 32 & 64 bit for, Ubuntu 12.04, 11.10 & 10.10, all but Ubuntu 10.10 had the same problem. Ubuntu 10.10 loaded ok, installed the presented ATI graphics driver as usual but was left with the AMD Unsupported watermark at the bottom right of the screen. The graphics card installed in the computer is an MSI ATI Radeon HD 6670 (in effect an AMD Radeon HD 6670). I am fairly new to linux and while I can install and tweak the OS, I am rather baffled as to what to do. So my question is will an up to date ATI Driver be released in the near future for installation/live disks? Or am I going to have to downgrade my graphics card to use linux? Yours Tom Wingrove

    Read the article

  • Does XNA/MonoGame have a text caching mechanism, or has an open source one been implemented?

    - by Casey
    I'm playing around with MonoGame, and I've noticed the SpriteFont class draws static text very inefficiently. Each time the text is drawn the spacing is recalculated. This isn't a big deal on my quad core PC, but on mobile applications it might be a problem. Before I go and program some text which caches the arrangement of its letters in an array and then feeds that array to the SpriteBatch, I would like to make sure there isn't something available to do this already, either in MonoGame itself or a class someone has implemented and made available for general use.

    Read the article
