Search Results

Search found 18142 results on 726 pages for 'wcf configuration'.

Page 270/726

  • Equivalent of PHP setlocale in an APACHE config file

    - by Nicolas
    I need to display a date in the French locale. One solution is to call setlocale(LC_TIME, 'fr_FR'); but I'm looking to set the locale directly in the configuration of my Apache server. In /etc/httpd/conf/httpd.conf I tried

        <VirtualHost *:80>
            ...
            SetEnv LC_TIME 'fr_FR'
        </VirtualHost>

    without any effect. In which PHP or Apache configuration file should I define the LC_TIME variable?
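
    A quick sanity check worth doing first (a sketch, assuming Debian/Ubuntu-style locale tooling; package names differ on RHEL/CentOS): SetEnv only exports an environment variable, and PHP's setlocale() simply returns false if the fr_FR locale is not installed on the system, so no Apache directive can help until the locale itself exists.

        # Is the French locale installed at all?
        locale -a | grep -i '^fr_FR'

        # If it is missing (Debian/Ubuntu tooling shown; adjust for your distro):
        sudo locale-gen fr_FR.UTF-8
        sudo apachectl restart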

    Read the article

  • Software Center - Items cannot be installed or removed until package catalog is repaired

    - by Stephanie
    I tried to install Back In Time and now I keep getting the message "Items cannot be installed or removed until the package catalog is repaired". I have tried sudo apt-get install -f, which ends with:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed: backintime-gnome
        The following packages will be upgraded: backintime-gnome
        1 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
        1 not fully installed or removed.
        Need to get 0 B/39.4 kB of archives.
        After this operation, 24.6 kB of additional disk space will be used.
        Do you want to continue [Y/n]?

    When I answer Y, I get:

        dpkg: dependency problems prevent configuration of backintime-gnome:
         backintime-gnome depends on backintime-common (= 1.0.7); however:
          Version of backintime-common on system is 1.0.8-1.
        dpkg: error processing backintime-gnome (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates it's a follow-up error from a previous failure.
        Errors were encountered while processing: backintime-gnome
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Running sudo dpkg --configure -a fails with the same dependency error: backintime-gnome depends on backintime-common (= 1.0.7), but version 1.0.8-1 of backintime-common is installed.
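
    The error itself is a version mismatch: the half-installed backintime-gnome 1.0.7 refuses to configure against the newer backintime-common 1.0.8-1. One way to get apt back to a consistent state (a sketch, not the only possible fix) is to drop the unconfigurable GUI package and reinstall both pieces so apt picks matching versions:

        sudo dpkg --remove backintime-gnome                       # remove the package stuck in "unconfigured"
        sudo apt-get install -f                                   # let apt repair anything else
        sudo apt-get install backintime-common backintime-gnome   # reinstall a matching pair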

    Read the article

  • How To Export/Import a Website in IIS 7.x

    - by Tray Harrison
    IIS 6 had a great feature called "Save Configuration to a File" which let you easily export a website's configuration, to be imported later either on the same server or on another box.  This came in handy any time you wanted to duplicate a site in order to do some testing without impacting the existing application.  So naturally, Microsoft decided to do away with this feature in IIS 7.  The process to export/import a site is still fairly simple, though not as obvious as it was in previous versions.  Here are the steps:

    1. Open a command prompt, navigate to C:\Windows\System32\inetsrv and run the following command:

        appcmd list site /name:<sitename> /config /xml > C:\output.xml

    So if you wanted to export a website named EAC, you would run:

        appcmd list site /name:EAC /config /xml > C:\output.xml

    2. If you'll be setting up another copy of the site on the same server, edit the output.xml file before importing it.  This is necessary in order to avoid conflicts such as bindings, Site ID, etc., so edit the XML and change those values.  Also make a copy of the home directory and rename it to whatever folder name you specified in the output - /EAC2 in this example.  If you decide to change the app pool, make sure you create the new app pool as well.

    3. Once these edits have been made, import the site by running:

        appcmd add sites /in < C:\output.xml

    That's it.  You should now see your site listed when opening Inet Manager.  If for some reason the site fails to start, that's probably because you forgot to create the new app pool or there is a problem with one of the other parameters you changed.  Look at the System event log to identify any issues like this.

    Read the article

  • XAMPP MySQL stops running after ~1.5 seconds

    - by Nona Urbiz
    I have tried installing it as a service. Nothing seems to work! I have checked the status page and MySQL is listed as "Deactivated". When trying to open phpMyAdmin I get:

        Error
        MySQL said: #1045 - Access denied for user 'root'@'localhost' (using password: NO)
        Connection for controluser as defined in your configuration failed.

        phpMyAdmin tried to connect to the MySQL server, and the server rejected the connection. You should check the host, username and password in your configuration and make sure that they correspond to the information given by the administrator of the MySQL server.

    and from the CD demo:

        Warning: mysql_connect() [function.mysql-connect]: Access denied for user 'root'@'localhost' (using password: NO) in C:\xampp\htdocs\xampp\cds.php on line 77
        Could not connect to database! Is MySQL running or did you change the password?

    Thanks for any suggestions or help you can give!

    Read the article

  • How to debug "MySQL server has gone away"?

    - by fefe
    I have a virtual machine (Ubuntu 12.04, MySQL 5.5) running under VMware that is dedicated to hosting a MySQL server, and I connect to it on an internal IP. I'm trying to find out why I get the "MySQL server has gone away" error; Apache on one of my Windows machines stops because of this issue. I have been trying to fine-tune my.cnf with the following parameters, but it did not bring the desired result:

        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        bind-address            = 0.0.0.0
        #
        # * Fine Tuning
        #
        wait_timeout            = 180
        key_buffer              = 384M
        max_allowed_packet      = 64M
        thread_stack            = 192K
        thread_cache_size       = 8
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover          = BACKUP
        max_connections         = 500
        table_cache             = 64
        #thread_concurrency     = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit       = 1M
        query_cache_size        = 32M

    How do I debug this issue? What is missing from the configuration to avoid this error?
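
    A starting point for the debugging itself (a sketch; the variable names are standard MySQL ones, but the log path is an assumption for a stock Ubuntu install): "server has gone away" usually means an idle connection outlived wait_timeout, a packet exceeded max_allowed_packet, or mysqld itself restarted, so check the values the running server actually uses and look for restarts.

        # my.cnf edits only take effect after a restart - confirm the live values.
        mysql -u root -p -e "SHOW VARIABLES LIKE 'wait_timeout'; SHOW VARIABLES LIKE 'max_allowed_packet';"

        # A low uptime means mysqld is being restarted or killed, which also
        # produces "server has gone away" on existing connections.
        mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Uptime';"

        # Look for crashes or out-of-memory kills around the time of the error.
        sudo tail -n 100 /var/log/mysql/error.log
        dmesg | grep -i -E 'oom|killed process'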

    Read the article

  • Problem with APTonCD application

    - by Harikrishnan
    I created an ISO image using APTonCD and burned it to a DVD. Now when I try to restore, the program does not detect the DVD in the drive: it shows "Please insert a disc in the drive." and, if I click OK, "E: Failed to mount the cdrom.", even though the DVD is in the drive. The output of sudo lshw -C disk is:

        *-cdrom
             description: DVD-RAM writer
             product: DVDRAM GH22NS50
             vendor: HL-DT-ST
             physical id: 1
             bus info: scsi@1:0.0.0
             logical name: /dev/cdrom
             logical name: /dev/cdrw
             logical name: /dev/dvd
             logical name: /dev/dvdrw
             logical name: /dev/scd0
             logical name: /dev/sr0
             logical name: /media/APTonCD
             logical name: /media/apt
             version: TN02
             capabilities: removable audio cd-r cd-rw dvd dvd-r dvd-ram
             configuration: ansiversion=5 mount.fstype=iso9660 mount.options=ro,relatime,uid=1000,gid=1000,iocharset=utf8,mode=0400,dmode=0500 state=mounted status=ready
           *-medium
                physical id: 0
                logical name: /dev/cdrom
                logical name: /media/APTonCD
                logical name: /media/apt
                configuration: mount.fstype=iso9660 mount.options=ro,relatime,uid=1000,gid=1000,iocharset=utf8,mode=0400,dmode=0500 state=mounted

    The Disk Utility application shows the DVD drive as /dev/sr0. My Ubuntu version is 10.10. Please help me solve the problem.
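
    One thing worth trying based on that output (a sketch, not a guaranteed fix): lshw shows the disc is already mounted at /media/APTonCD and /media/apt, and APTonCD's "Failed to mount the cdrom" error can simply mean it cannot mount a disc that is already mounted somewhere else. Unmount it so the restore tool can mount it itself, or mount it at the classic /media/cdrom location:

        sudo umount /media/APTonCD /media/apt

        # If APTonCD still refuses, mount the disc manually and look at its contents:
        sudo mkdir -p /media/cdrom
        sudo mount -t iso9660 -o ro /dev/sr0 /media/cdrom
        ls /media/cdrom        # the exported packages should be visible here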

    Read the article

  • Can web applications running on IIS7 Windows Server 2008 R2 be forced to immediately detect changes to hosts file?

    - by Brenda Bell
    We have several web applications running on several load-balanced servers, and we want them to communicate with each other without first traversing the load balancer. For example:

        http://appA.example.com is running on 192.0.2.1 and 192.0.2.2
        http://appB.example.com is also running on 192.0.2.1 and 192.0.2.2
        The load balancer's public IP address is 198.51.100.3

    By default, when appA on 192.0.2.1 makes a call to a WCF service hosted in appB, the HTTP request is routed to 198.51.100.3; this establishes a new session and the load balancer will direct the call to either of the two servers. We want the call routed to the instance of appB running on the same server, so we add "192.0.2.1 appB.example.com" to the hosts file on 192.0.2.1. This eventually works, but we either have to wait for the app pool to recycle naturally or do a manual reset before appA sees the new address. Is there any way to have the change detected automatically without having to recycle the app pool?

    Read the article

  • If your algorithm is correct, does it matter how long it took you to write it?

    - by John Isaacks
    I recently found out that Facebook has a programming challenge which, if completed correctly, automatically gets you a phone interview. There is a sample challenge that asks you to write an algorithm to solve a Tower of Hanoi type problem: given a number of pegs and discs and an initial and final configuration, your algorithm must determine the fewest steps possible to reach the final configuration and output those steps. The sample challenge gives you a 45-minute time limit, but still lets you test whether your code passes after the limit expires. I did not know of any cute math solution that could solve it, and I didn't want to look for one since I think that would be cheating, so I tried to solve the challenge as best I could on my own. I was able to write an algorithm that worked and passed. However, it took me over 4 hours, much longer than the 45-minute requirement, so I have not attempted the actual challenge. This got me wondering, though: in reality, does it really matter that it took me that long? Is this a sign that I will not be able to get a job at a place like this (not just Facebook, but Google, Fog Creek, etc.) and need to lower my aspirations, or can the fact that I passed on my first attempt, even though it took too long, be taken as a good sign?

    Read the article

  • Security settings for this service require 'Basic' Authentication

    - by Jake Rutherford
    Had an issue calling a WCF service today. The following exception was thrown when the service was called:

        WebHost failed to process a request.
        Sender Information: System.ServiceModel.ServiceHostingEnvironment+HostingManager/35320229
        Exception: System.ServiceModel.ServiceActivationException: The service '/InteliChartVendorCommunication/VendorService.svc' cannot be activated due to an exception during compilation. The exception message is: Security settings for this service require 'Basic' Authentication but it is not enabled for the IIS application that hosts this service.

    I made sure Basic authentication was indeed enabled in IIS before getting stumped on what the actual issue could be. It turned out to be the customErrors setting: the value was set to "off" instead of "Off". I would have expected a different exception from .NET (i.e. a web.config parse exception), but either way it works now.

    Read the article

  • slow virtualbox guest

    - by ecoologic
    I run an Ubuntu 12.04 guest on an Ubuntu 12.04 host with VirtualBox, and the guest is much, much slower than the host (ALT+TAB takes 4-5 seconds). I had a look around and found contradicting opinions on VirtualBox vs VMware (free), so I thought I'd keep the former. Both systems are updated, I installed the Guest Additions on the guest, and I evenly split memory and video memory (64 MB) between guest and host. I am running a Toshiba M200 laptop with 4 GB RAM and shared video memory. The host BIOS does not include a configuration option for machine virtualization. I have 2 CPUs and I can't give them both to the VM. Is there anything I overlooked that could solve my problem? Feel free to ask for more info, and thank you for any help.

    EDIT: Idling with the monitor open, the (single) guest CPU never gets below 55% and can reach 80-90% just from moving the mouse around; opening Firefox pushes the guest to 100% while the host shows both CPUs evenly working around 60%. My CPU is an Intel® Core™2 Duo CPU T5450 @ 1.66GHz × 2. If this is not a configuration problem, does it mean my machine is too weak for virtualization?
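
    Two quick checks from the host before blaming the hypervisor (a sketch; "MyGuest" is a placeholder for the VM's actual name): whether the CPU exposes hardware virtualization at all - not every Core 2 Duo model has VT-x, and without it VirtualBox falls back to much slower software virtualization - and what the VM is currently configured with.

        # Does the host CPU advertise hardware virtualization? (vmx = Intel VT-x)
        egrep -c '(vmx|svm)' /proc/cpuinfo        # 0 means no VT-x/AMD-V available

        # What is the VM using right now?
        VBoxManage showvminfo "MyGuest" | grep -i -E 'VT-x|Memory|VRAM'

        # If VT-x is unavailable, keep the guest on 1 CPU and raise the video
        # memory so the guest desktop is not starved for VRAM.
        VBoxManage modifyvm "MyGuest" --cpus 1 --vram 128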

    Read the article

  • lshw tells me my processor is 64-bit but my motherboard has a 32-bit width

    - by bpetit
    Recently I noticed lshw telling me a strange thing. Here is the first part of my lshw output:

        bpetit-1025c
            description: Notebook
            product: 1025C (1025C)
            vendor: ASUSTeK COMPUTER INC.
            version: x.x
            serial: C3OAAS000774
            width: 32 bits
            capabilities: smbios-2.7 dmi-2.7 smp-1.4 smp
            configuration: boot=normal chassis=notebook cpus=2 family=Eee PC...
          *-core
              description: Motherboard
              product: 1025C
              vendor: ASUSTeK COMPUTER INC.
              physical id: 0
              version: x.xx
              serial: EeePC-0123456789
              slot: To be filled by O.E.M.
            *-firmware
                description: BIOS
                vendor: American Megatrends Inc.
                physical id: 0
                version: 1025C.0701
                date: 01/06/2012
                size: 64KiB
                capacity: 1984KiB
                capabilities: pci upgrade shadowing cdboot bootselect socketedrom edd...
            *-cpu:0
                description: CPU
                product: Intel(R) Atom(TM) CPU N2800 @ 1.86GHz
                vendor: Intel Corp.
                physical id: 4
                bus info: cpu@0
                version: 6.6.1
                serial: 0003-0661-0000-0000-0000-0000
                slot: CPU 1
                size: 798MHz
                capacity: 1865MHz
                width: 64 bits
                clock: 533MHz
                capabilities: x86-64 boot fpu fpu_exception wp vme de pse tsc ...
                configuration: cores=2 enabledcores=1 id=2 threads=2
              *-cache:0
                  description: L1 cache
                  physical id: 5
                  slot: L1-Cache
                  size: 24KiB
                  capacity: 24KiB
                  capabilities: internal write-back unified
              *-cache:1
                  description: L2 cache
                  physical id: 6
                  slot: L2-Cache
                  size: 512KiB
                  capacity: 512KiB
                  capabilities: internal varies unified
              *-logicalcpu:0
                  description: Logical CPU
                  physical id: 2.1
                  width: 64 bits
                  capabilities: logical
              *-logicalcpu:1
                  description: Logical CPU
                  physical id: 2.2
                  width: 64 bits
                  capabilities: logical
              *-logicalcpu:2
                  description: Logical CPU
                  physical id: 2.3
                  width: 64 bits
                  capabilities: logical
              *-logicalcpu:3
                  description: Logical CPU
                  physical id: 2.4
                  width: 64 bits
                  capabilities: logical
            *-memory
                description: System Memory
                physical id: 13
                slot: System board or motherboard
                size: 2GiB
              *-bank:0
                  description: SODIMM [empty]
                  product: [Empty]
                  vendor: [Empty]
                  physical id: 0
                  serial: [Empty]
                  slot: DIMM0
              *-bank:1
                  description: SODIMM DDR3 Synchronous 1066 MHz (0.9 ns)
                  product: SSZ3128M8-EAEEF
                  vendor: Xicor
                  physical id: 1
                  serial: 00000004
                  slot: DIMM1
                  size: 2GiB
                  width: 64 bits
                  clock: 1066MHz (0.9ns)
            *-cpu:1
                physical id: 1
                bus info: cpu@1
                version: 6.6.1
                serial: 0003-0661-0000-0000-0000-0000
                size: 798MHz
                capacity: 798MHz
                capabilities: ht cpufreq
                configuration: id=2
              *-logicalcpu:0
                  description: Logical CPU
                  physical id: 2.1
                  capabilities: logical
              *-logicalcpu:1
                  description: Logical CPU
                  physical id: 2.2
                  capabilities: logical
              *-logicalcpu:2
                  description: Logical CPU
                  physical id: 2.3
                  capabilities: logical
              *-logicalcpu:3
                  description: Logical CPU
                  physical id: 2.4
                  capabilities: logical

    So here I see that my processor is effectively a 64-bit one. However, I'm wondering how my motherboard can have a "32 bits" width. I've browsed the web for an answer, without success. I imagine it's just a technical fact that I don't know about. Thanks.
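
    For what it's worth (a hedged note, since lshw's heuristics are not spelled out in its own output): the top-level "width: 32 bits" describes the running system - a 32-bit kernel/userland reports 32 bits there even on a 64-bit-capable CPU like the N2800 - while the per-CPU "width: 64 bits" describes the silicon. A couple of standard commands make the distinction visible:

        getconf LONG_BIT      # word size of the installed userland: 32 or 64
        uname -m              # i686 = 32-bit kernel, x86_64 = 64-bit kernel
        grep -q ' lm ' /proc/cpuinfo && echo "CPU supports 64-bit (long mode)"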

    Read the article

  • How to quantify product work in Resume?

    - by mob1lejunkie
    One of the things I do in my resume is try to quantify the impact my work has had at the company I was with at the time, because it shows the value my work added to the business. Is this what you guys do as well, or am I the only one? In my previous job this was easy, as I worked on short/medium internal applications and it was fairly easy to measure the end result. For example, an external consulting company quoted $50,000 for an application the Business Services department wanted; I completed it in 3 days, so I say I saved the company $48,000. I have been in my current job for 3 years, but all of it has been on one single, well-established product. About 30% of the work is maintenance and 70% is new modules. I have worked on various modules like API (WCF), Security (2-factor authentication), etc. How should I quantify work on modules? Many thanks.

    Read the article

  • Q&A: Where does high performance computing fit with Windows Azure?

    - by Eric Nelson
    Answer: I have been asked a couple of times this year about taking compute-intensive operations to Windows Azure and/or doing High Performance Computing on Windows Azure. It is an interesting (if slightly niche) area. The good news is we have a great paper from David Chappell on HPC Server and Windows Azure integration. As a taster: a SOA application running entirely on Windows Azure runs its WCF services in Azure Worker nodes. Download it now. Related links: other Q&A posts on my team blog; and don't forget to connect with the UK team if you stumbled across this post by accident/Bing/Google.

    Read the article

  • Managing service passwords with Puppet

    - by Jeff Ferland
    I'm setting up my Bacula configuration in Puppet. One thing I want to do is ensure that each password field is different. My current thought is to hash the hostname with a secret value, which would give each file daemon a unique password that can be written to both the director configuration and the file server. I definitely don't want to use one universal password, as that would permit anybody who compromises one machine to get access to any machine through Bacula. Is there another way to do this other than using a hash function to generate the passwords? Clarification: this is NOT about user accounts for services. This is about the authentication tokens (to use another term) in the client/server files. Example snippet:

        Director {                            # define myself
          Name = <%= hostname $>-dir
          QueryFile = "/etc/bacula/scripts/query.sql"
          WorkingDirectory = "/var/lib/bacula"
          PidDirectory = "/var/run/bacula"
          Maximum Concurrent Jobs = 3
          Password = "<%= somePasswordFunction =>"   # Console password
          Messages = Daemon
        }
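
    For what it's worth, the hostname-plus-secret hash is easy to prototype outside Puppet; this is a minimal sketch of what an ERB helper or custom function would compute per node (the secret value and the truncation length are placeholders). The same derived value can then be templated into both the director configuration and the matching file daemon configuration.

        # Derive a deterministic per-host token from the hostname and a shared secret.
        SECRET='some-long-random-site-secret'    # placeholder - keep it out of the catalog and VCS
        HOST=$(hostname -f)
        printf '%s:%s' "$HOST" "$SECRET" | sha256sum | cut -c1-40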

    Read the article

  • How to turn off screen (DPMS) together with locking session in KDE?

    - by gertvdijk
    First of all, I'm aware a similar question for GNOME was asked here: "Switch off laptop backlight when locking screen".

    Objective: I would like to turn off my screen when locking the session, for power-saving reasons.

    Actual problem: Locking the screen on Kubuntu (KDE) inevitably triggers the screensaver as far as I can see. There's no screensaver option other than 'Blank screen', which, together with its background colour set to black, comes close to my goal: it blanks the screen, but doesn't turn the screen off. The backlight stays on and no power is saved.

    Current workaround: A workaround via a script plus a shortcut key is possible, but it's just a workaround since it doesn't trigger on all the ways of locking the session. I think it should be possible to do this more elegantly, for example by offering the option in KDE's screensaver configuration dialog. The workaround I am using is the following script, which locks the screen and turns off the screen:

        #!/bin/bash
        qdbus org.freedesktop.ScreenSaver /ScreenSaver Lock
        xset dpms force standby

    and I run it with a shortcut key via a custom menu entry. It works. Here's why I consider it a workaround rather than a solution: it doesn't cover other ways of triggering the session lock.

    My actual questions: Do I need to touch/patch KDE's source? If not, what are my options? If so, could someone point me to where to get started? And what do you think is the recommended place in the GUI for this configuration? I'm using Kubuntu 12.04 and willing to upgrade to KDE 4.9 or wait for the 12.10 release.
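
    One direction that avoids patching KDE (a sketch, assuming KDE exposes the standard org.freedesktop.ScreenSaver interface, which Kubuntu 12.04 does): instead of binding the DPMS call to a shortcut, listen for the lock signal itself, so the screen is switched off no matter how the lock was triggered.

        #!/bin/bash
        # Watch for the session becoming locked (ActiveChanged -> true) and
        # force the display off via DPMS; any input after unlocking wakes it.
        dbus-monitor --session \
          "type='signal',interface='org.freedesktop.ScreenSaver',member='ActiveChanged'" |
        while read -r line; do
          case "$line" in
            *"boolean true"*) xset dpms force off ;;
          esac
        done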

    Read the article

  • Now Available: Oracle Utilities Customer Care & Billing Version 2.4.0 SP1

    - by Roxana Babiciu
    We are pleased to announce the general availability of Oracle Utilities Customer Care & Billing 2.4.0 SP1. Key features and benefits: Oracle Utilities Customer Care & Billing 2.4.0 SP1 includes several base enhancements and a new licensable module called Customer Program Management. Key base enhancements in this release are:

    Configuration Migration Assistant (Additional Migration Plans) – Configuration Migration Assistant (CMA) was introduced in Oracle Utilities Application Framework V4.2.0 to supersede the ConfigLab facility. Oracle Utilities Customer Care and Billing now has a large number of migration plans to support migrating administration objects between environments.

    Encryption – Ability to configure encryption for fields that store sensitive data such as credit card numbers, bank account numbers, social security numbers, and MICR ID.

    Single Euro Payments Area (SEPA) Direct Debit – Functionality for configuring recurring direct debit payments in accordance with the Single Euro Payments Area (SEPA) initiative.

    Usage Enhancement for Bill Print – Allows additional information to be captured on a usage request to support billing when meter reads are not obtained from Oracle Utilities Customer Care & Billing but from a meter data management system (e.g. Oracle Utilities Meter Data Management).

    Preferences Portal – Communication preference zones allowing utilities to track customers’ preferred communication channels for various types of notifications or communications (e.g. phone, SMS, email).

    More information can be found on OPN!

    Read the article

  • alsa - sound issues on ubuntu 12.04

    - by tam_ubuuser
    I have a Sony E series laptop with an HDMI port. At this stage I have tested my sound card, which provides audio out on my laptop, i.e. I can hear songs. My laptop has two sound cards, an AMD 5450 and an Intel HDA (alsamixer shows it as S/PDIF). I decided to connect the HDMI output to my new HD TV, but I only get video on the TV and no audio (the HDMI cable works fine with Windows 7), and I couldn't switch the output to the other card (I don't know how to do that). So I decided to update ALSA and ran the following in a terminal:

        sudo apt-add-repository ppa:ubuntu-audio-dev/alsa-daily
        sudo apt-get update
        sudo apt-get install alsa-hda-dkms

    Then, strangely, there was no login sound and no audio output on my laptop at all. So I started the sound troubleshooting procedure from the official Ubuntu site, from step 1. Then the speaker icon disappeared from the taskbar, and aplay -l now reports "no soundcards detected". Step 4 of that guide lists all the audio hardware in my laptop:

        *-multimedia UNCLAIMED
             description: Audio device
             product: Cedar HDMI Audio [Radeon HD 5400/6300 Series]
             vendor: Hynix Semiconductor (Hyundai Electronics)
             physical id: 0.1
             bus info: pci@0000:01:00.1
             version: 00
             width: 64 bits
             clock: 33MHz
             capabilities: pm pciexpress msi bus_master cap_list
             configuration: latency=0
             resources: memory:f0040000-f0043fff
        *-multimedia UNCLAIMED
             description: Audio device
             product: 5 Series/3400 Series Chipset High Definition Audio
             vendor: Intel Corporation
             physical id: 1b
             bus info: pci@0000:00:1b.0
             version: 05
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list
             configuration: latency=0
             resources: memory:f5e00000-f5e03fff

    That command shows the names of the two cards, but I still get nothing from aplay -l, so I think ALSA is not detecting my sound cards. Is there a solution to this problem? It would be better if ALSA could route output through multiple sound cards - how should I install and configure ALSA so that it detects the HDMI output as soon as I connect the cable to my HD TV? And is it possible for ALSA and PulseAudio 2.0 to coexist, and if so, how?
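
    A place to start (a sketch; the "UNCLAIMED" flag in that lshw output means no kernel driver is currently bound to either audio device, which is consistent with the daily-build alsa-hda-dkms package having broken the stock driver): roll the PPA packages back first, then look at what ALSA and PulseAudio can see and switch the output profile to HDMI.

        # Return to the stock Ubuntu audio driver packages.
        sudo apt-get install ppa-purge
        sudo ppa-purge ppa:ubuntu-audio-dev/alsa-daily
        sudo reboot

        # After the reboot, check what is detected again:
        aplay -l                    # ALSA playback devices
        pactl list short cards      # PulseAudio's view of the same cards

        # Switching output to HDMI (the card index and profile name below are
        # assumptions - use the exact names printed by the commands above):
        pactl set-card-profile 0 output:hdmi-stereo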

    Read the article

  • Set up Gmail with Google apps for own domain

    - by erdomester
    I rent a server from a German company and have remote access to it as well as WHM and cPanel. I'm not an admin, just an average guy trying to set up what needs to be set up, and I decided to use Google's mail servers for obvious reasons. The problem is I am unable to make the necessary settings. I watched YouTube tutorials and followed written ones as well as Google's help, but there is (at least) one serious problem with my domain settings. The domain console always says:

        Your MX records are incorrect

    When I check dappwall.com on mxtoolbox.com it reports:

        Pref  Hostname            IP Address    TTL
        10    mail.dappwall.com   46.4.88.247   24 hrs

    But this is not the hostname. WHM says my hostname is server1.dappwall.com, which I can confirm by typing the hostname command in PuTTY. However, if I do an MX lookup on mxtoolbox.com for server1.dappwall.com or mail.dappwall.com I get:

        Lookup failed after 1 name servers timed out or responded non-authoritatively

    I ran the checks in the Google Apps Toolbox on dappwall.com and two problems emerged:

    1. "No Google mail exchangers found. Relayhost configuration? 10 mail.dappwall.com". In Google Apps > Settings for Gmail > Advanced settings it also says that my current MX record for dappwall.com is priority 10, pointing to MAIL.DAPPWALL.COM - so mail.dappwall.com again. I also have access to a robot (control panel) provided by the company I rent the server from; I see this mail entry in two places there, but how should I (if it's necessary) modify it? I set Email Routing to "Automatically Detect Configuration".

    2. "There SHOULD be a valid SPF record: v=spf1 include:_spf.google.com ~all". In the DNS Zone Editor I added this SPF record:

        Name            TTL    Class  Type  Record
        dappwall.com.   1440   IN     TXT   v=spf1 include:_spf.google.com ~all

    On the cPanel Email Authentication page it says:

        SPF: Status: Enabled
        Warning: cPanel is unable to verify that this server is an authoritative nameserver for dappwall.com.
        Your current raw SPF record is: v=spf1 include:_spf.google.com ~all

    How can I confirm that my server is an authoritative nameserver for dappwall.com? In WHM > Service Configuration > Mailserver Selection, Dovecot was set but I disabled it (I don't know if that's OK). What am I missing here? Where is that mail.dappwall.com coming from?
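
    Before changing anything else, it helps to see what the outside world actually resolves for the zone; these are standard dig queries with no assumptions beyond the domain name. For Google Apps mail to verify, the MX answer has to list Google's exchangers (ASPMX.L.GOOGLE.COM and the alternates from the Google Apps setup instructions) rather than mail.dappwall.com, and the TXT answer should include the SPF string.

        dig +short NS dappwall.com      # which nameservers are authoritative for the zone
        dig +short MX dappwall.com      # should list ASPMX.L.GOOGLE.COM etc. for Google mail
        dig +short TXT dappwall.com     # should include "v=spf1 include:_spf.google.com ~all"

        # Ask your own server directly, to confirm it answers authoritatively:
        dig @server1.dappwall.com dappwall.com SOA +norecurse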

    Read the article

  • mod_rewrite and SEO friendliness

    - by John Doe
    My website has an atypical structure and I'm not sure if this could create problems in the long run, especially for SEO positioning purposes. I have a single, large PHP script, and I use the Apache module mod_rewrite in the .htaccess file to create friendly URLs, for example:

        RewriteRule ^$ /index.php?section=Main
        RewriteRule ^createArticle$ /index.php?section=Main&view=CreateArticle
        RewriteRule ^configuration$ /index.php?section=Configuration
        RewriteRule ^article/([0-9]{1,10})$ /index.php?section=Article&view=Default&id=$1
        RewriteRule ^deleteArticle/([0-9]{1,10})$ /index.php?section=Article&view=Delete&id=$1
        RewriteRule ^reportArticle/([0-9]{1,10})$ /index.php?section=Article&view=Report&id=$1
        RewriteRule ^logIn$ /index.php?section=Authentication
        ...

    So www.example.com/index.php?section=Article&view=Default&id=105 becomes www.example.com/article/105. The only real physical file is index.php, which processes the parameters in the queried URL and outputs the corresponding result. My question is: do crawling robots (e.g. Googlebot) recognize these links? Do they index the HTML output of index.php with the specified parameters as if it were an actual HTML file? Also, would this become a problem when creating a sitemap?
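
    One quick way to see what a crawler sees (plain curl; the URLs are the example.com ones from the rules above): fetch the friendly URL directly. If it returns 200 with the full HTML, that is exactly what Googlebot indexes - the internal rewrite to index.php is invisible to the client, so listing the friendly URLs in a sitemap works the same as listing "real" files.

        # The crawler only ever sees the public URL and the response, never the rewrite.
        curl -s -o /dev/null -w '%{http_code}\n' http://www.example.com/article/105

        # The friendly URL and the raw index.php URL should render identical HTML:
        diff <(curl -s http://www.example.com/article/105) \
             <(curl -s 'http://www.example.com/index.php?section=Article&view=Default&id=105')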

    Read the article

  • Congratulations to latest 2nd quarter Silverlight MVPs

    Congratulations to all the new/returning MVPs from all competencies, but I wanted to call out the newly awarded Silverlight MVPs for this latest round. Please join me in congratulating them: Xuan Qin (China), Mark Monster (The Netherlands) (@Mark_Monster), Rene Schulte (Germany) (@rschu), Seungmin Ha (Korea), and Jaana Metsamaa (Estonia). And a specific call-out also to Colin Blair (@SLColinBlair): Colin's work in the WCF RIA Services space gained him recognition from the connected...

    Read the article

  • Wireless connection disconnects and reconnects with a Netgear WNA1000

    - by William Berkenkamp
    Ever since I made the permanent switch from Vista to Ubuntu I've had wireless connectivity problems. Watching the network manager when it disconnects, it looks as if it turns off the receiver for some reason. Could it be bad drivers? I used their install software, and the site doesn't really offer driver downloads. The adapter is a Netgear WNA1000 if memory serves, and I don't know much about the router except that it's a Motorola Surfboard. I figure this might help a bit:

        *-network
             description: Ethernet interface
             product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
             vendor: Realtek Semiconductor Co., Ltd.
             physical id: 0
             bus info: pci@0000:01:00.0
             logical name: eth0
             version: 01
             serial: 00:1b:b9:a7:39:a4
             size: 10Mbit/s
             capacity: 100Mbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm vpd msi pciexpress bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half firmware=N/A latency=0 link=no multicast=yes port=MII speed=10Mbit/s
             resources: irq:40 ioport:d800(size=256) memory:feaff000-feafffff memory:feac0000-feadffff
        *-network
             description: Wireless interface
             physical id: 1
             bus info: usb@2:1.1
             logical name: wlan0
             serial: 00:26:f2:8b:fb:38
             capabilities: ethernet physical wireless
             configuration: broadcast=yes driver=carl9170 driverversion=3.2.0-24-generic-pae firmware=1.9.4 ip=10.0.0.36 link=yes multicast=yes wireless=IEEE 802.11bgn

    I have tried installing WICD and it didn't fix the problem. Any help would be greatly appreciated - this problem is greatly limiting what I can do with my computer.
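
    Two cheap checks (a sketch; according to that lshw output the adapter is driven by the in-kernel carl9170 driver, so Netgear's Windows install software is not involved here): see whether the kernel logs a reason at the moment of the drop, and whether power management is switching the radio off.

        # Watch the system log while the connection drops; carl9170/wpa messages
        # usually say why (deauthentication, firmware restart, USB reset, ...).
        tail -f /var/log/syslog | grep -i -E 'carl9170|wlan0'

        # Disable wireless power management for this session; if the drops stop,
        # make the setting permanent in your network configuration.
        sudo iwconfig wlan0 power off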

    Read the article

  • Transfer Win8 user settings between profiles [closed]

    - by GlennFerrieLive
    Possible Duplicate: How do I sync grouped Windows Store apps between devices?

    Is there a way for me to copy/save/transfer my "start menu" configuration - meaning the grouping and ordering of the elements on the Start screen - between user profiles? Is it in the registry? I am open to manual or "coded" suggestions.

    UPDATE: I'd like to VETO this closing. I am aware of the "roaming" profile behavior. I want to COPY my configuration BETWEEN profiles on the same machine... DIFFERENT profile, DIFFERENT person. I like the way my start screen is set up and I want to set my wife up with the same layout.

    Read the article

  • “yourdomain/start is not the same thing as yourIP/start in Apache”

    - by user1883050
    Let's say you're trying to get a CMS up and running, and you're supposed to find a start page at "www.yourdomain.com/start". But you don't have a domain name yet - you only have an IP address (yourIPaddress), and Apache is visibly running at yourIPaddress. So you look at "yourIPaddress/start" and you don't find anything there, just a 404 page. And the person who installed it for you tells you: "In Apache, yourdomain/start is not the same thing as yourIP/start. Please read up on Apache server configuration to figure this out. And that's all the help I can give." My question is: what concepts (regarding Apache configuration) should I read up on so that I can find the start page? Thoughts?
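
    The concept to read up on is name-based virtual hosting: Apache chooses a VirtualHost block by the Host header the client sends, so a request addressed to the bare IP can land in a different (default) vhost than a request addressed to the domain, even though both reach the same server. You can see the difference without owning the domain yet; a small sketch with the placeholder names from the question:

        # Ask the same IP for the same path, but present the real domain name.
        curl -i http://yourIPaddress/start                                # default vhost: the 404
        curl -i -H 'Host: www.yourdomain.com' http://yourIPaddress/start  # the CMS vhost

        # Or, on the server itself, list the virtual hosts Apache knows about:
        apachectl -S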

    Read the article

  • Unable to start Apache after changes to rc.conf and resolv.conf

    - by shupru
    I had a working configuration this morning with the following simple /etc/rc.conf:

        ifconfig_rl0="DHCP"
        ifconfig_xl="inet 192.168.1.11 netmask 255.255.255."
        defaultrouter="192.168.1.1"

    I added the following lines:

        firewall_enable="YES"
        firewall_type="SIMPLE"
        firewall_logging="YES"
        sshd_enable="YES"
        apache_enable="YES"
        mysql_enable="YES"

    My httpd.conf includes:

        NameVirtualHost 192.168.1.11
        <VirtualHost 192.168.1.11>
        ...
        </VirtualHost>

    Now Apache and the SSH server are down. I changed rc.conf back to the last working configuration and there is still no SSH or Apache:

        apachectl start    # -> /usr/local/sbin/apachectl start: httpd could not be started
        apachectl status   # -> Looking up localhost
                           #    Making http connection to localhost
                           #    Alert!: Unable to connect to remote host.
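
    Some things worth checking, in this order (a sketch for a FreeBSD-style box like this one; the log path is the usual default and may differ): whether the new SIMPLE ipfw ruleset is now blocking ports 22 and 80, whether httpd's configuration still parses, and whether anything is listening at all.

        # firewall_type="SIMPLE" loads an ipfw ruleset that only opens the ports
        # it knows about - see what rules are actually loaded.
        sudo ipfw list

        # Does the Apache configuration still parse, and what do the logs say?
        sudo apachectl configtest
        tail -n 50 /var/log/httpd-error.log   # default location; adjust if customized

        # Is anything listening on the SSH/HTTP ports at all?
        sockstat -4 -l                        # look for entries bound to :22 and :80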

    Read the article

  • puppet agent doesn't retrieve files from master

    - by nicmon
    I have a very basic question regarding Puppet 3.0.1 configuration. I set up a Puppet master (CentOS) with two agents (CentOS and Windows 7); all three can ping and access each other, and there is no error at all. I have copied a file to /etc/puppet/files/test2.txt, and my site.pp (in /etc/puppet/manifests) contains these lines:

        node default {
          include test
          file { "/tmp/testmaster.txt":
            owner  => root,
            group  => root,
            mode   => 644,
            source => "puppet:///files/test2.txt"
          }
        }

    but no file is created under /tmp/ on the agent servers when I run puppet agent --test. Here is the output:

        [root@agent1 ~]# puppet agent --test
        Info: Retrieving plugin
        Info: Caching catalog for agent1.mydomain.com
        Info: Applying configuration version '1354267916'
        Finished catalog run in 0.02 seconds

    Running puppet apply /etc/puppet/manifests/site.pp directly does create testmaster.txt under /tmp/.
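
    One thing to verify on the master (a sketch; the paths are the Puppet 3 defaults from the question, and this is only one possible cause): a puppet:///files/... source is served from the custom "files" mount, which exists only if fileserver.conf declares it and allows the agents to read it. A debug run on the agent then shows whether the file resource made it into the catalog and what the master answers for that source URL.

        # On the master: declare the "files" mount (not present by default).
        printf '[files]\n  path /etc/puppet/files\n  allow *\n' | sudo tee /etc/puppet/fileserver.conf
        sudo service puppetmaster restart   # or restart Apache/Passenger if the master runs there

        # On the agent: trace what happens to that resource.
        puppet agent --test --debug | grep -i -E 'testmaster|test2'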

    Read the article
