Search Results

Search found 20561 results on 823 pages for 'automate everything'.


  • Easy BCD Help: Dual boot Win7 and Ubuntu 11.10 -- "Add new Entry" for Ubuntu

    - by Bradley Peterson
    I first had Ubuntu 11.10 installed on a single partition of my 750GB hard drive. I then repartitioned the drive into 500GB for Ubuntu in ext4 format (what it already was from the clean install) and 250GB for Win7 in NTFS format, and installed Win7 onto that 250GB partition. Installation went smoothly; I successfully booted into Win7 and set everything up. After I was done doing all the updates from Microsoft, I thought I was finished and wanted to go back to Ubuntu. This is where the problem starts. Of course I reboot and it goes directly to Win7. I research and find that Win7 has overwritten the Ubuntu bootloader; I don't fully understand it. So I downloaded EasyBCD 2.1.2 and did the following:

    - In EasyBCD, I select "Add New Entry", select "Linux/BSD", change the type to "GRUB 2", and name it "Ubuntu".
    - Next, I go to "BCD Deployment", select "Install the Windows Vista/7 bootloader to the MBR", and click "Write MBR".
    - I reboot and select "Ubuntu"; the purple screen comes up, but NOTHING HAPPENS.

    If I hit Ctrl+Alt+Del, it goes to the login menu, where it acts normal for about 10-15 seconds, then freezes. It does this repeatedly, every time. MY QUESTION: What's wrong here? Why can't I load Ubuntu now? Am I going to have to reinstall Ubuntu with Windows, then set up the bootloader with EasyBCD instead of Ubuntu, THEN Win7? Any and all help is appreciated! -Brad

    Read the article

  • Windows 7 can't connect via ethernet to Airport Extreme

    - by Mr AJL
    I have an Airport Extreme router, and everything works beautifully on the Mac, but my Windows 7 computer, which is connected via a regular ethernet cable, can't see any network. The Windows computer says there's no cable connected. Any ideas? Is there some kind of setting you have to enable in the AirPort Utility? I looked everywhere but can't find anything. Or do I need to install the AirPort Utility on the PC just to connect to the router? (I haven't tried, because the PC has no CD drive.)

    Read the article

  • Don't Use "Static" in C#?

    - by Joshiatto
    I submitted an application I wrote to some other architects for code review. One of them almost immediately wrote me back and said: "Don't use 'static'. You can't write automated tests with static classes and methods. 'Static' is to be avoided." I checked, and fully 1/4 of my classes are marked "static". I use static when I am not going to create an instance of a class, because the class is a single global class used throughout the code. He went on to mention mocking and IoC/DI techniques that can't be used with static code. He says it is unfortunate when 3rd-party libraries are static, because of their un-testability. Is this other architect correct?

    Update: here is an example. APIManager - this class keeps dictionaries of the 3rd-party APIs I am calling, along with the next allowed call time. It enforces the API usage limits that a lot of 3rd parties have in their terms of service. I use it anywhere I am calling a 3rd-party service, by calling Thread.Sleep(APIManager.GetWait("ProviderXYZ")); before making the call. Everything in here is thread safe, and it works great with the TPL in C#.
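
    For context, a minimal sketch of the seam the reviewing architect is likely describing: put an interface in front of the static class so call sites can be unit-tested with a mock. Only APIManager.GetWait comes from the post above; the interface, adapter, and client names are hypothetical, and GetWait is assumed to return milliseconds (anything Thread.Sleep accepts works the same way).

        using System.Threading;

        // Stand-in for the poster's static class (the real one tracks per-provider limits).
        public static class APIManager
        {
            public static int GetWait(string provider) { return 0; }
        }

        // Hypothetical seam: an interface a test can fake.
        public interface IApiThrottle
        {
            int GetWait(string provider);
        }

        // Production adapter that delegates to the static class unchanged.
        public class ApiManagerThrottle : IApiThrottle
        {
            public int GetWait(string provider) { return APIManager.GetWait(provider); }
        }

        public class ProviderClient
        {
            private readonly IApiThrottle throttle;

            public ProviderClient(IApiThrottle throttle) { this.throttle = throttle; }

            public void CallProvider()
            {
                // Same effect as Thread.Sleep(APIManager.GetWait("ProviderXYZ")),
                // but a test can now inject a fake throttle and avoid real sleeps.
                Thread.Sleep(throttle.GetWait("ProviderXYZ"));
                // ... make the third-party call here ...
            }
        }

    Note that the static class itself doesn't have to go away: the adapter preserves its behavior in production while the call sites become mockable, which addresses the testability objection without rewriting a quarter of the codebase.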

    Read the article

  • Organizing an entity system with external component managers?

    - by Gustav
    I'm designing a game engine for a top-down multiplayer 2D shooter game, which I want to be reasonably reusable for other top-down shooter games. At the moment I'm thinking about how something like an entity system in it should be designed. First I thought about this:

    I have a class called EntityManager. It should implement a method called Update and another one called Draw. The reason for separating Logic and Rendering is that I can then omit the Draw method if running a standalone server. EntityManager owns a list of objects of type BaseEntity. Each entity owns a list of components such as EntityModel (the drawable representation of an entity), EntityNetworkInterface, and EntityPhysicalBody.

    EntityManager also owns a list of component managers like EntityRenderManager, EntityNetworkManager and EntityPhysicsManager. Each component manager keeps references to the entity components. There are various reasons for moving this code out of the entity's own class and doing it collectively instead. For example, I'm using an external physics library, Box2D, for the game. In Box2D, you first add the bodies and shapes to a world (owned by the EntityPhysicsManager in this case) and add collision callbacks (which would be dispatched to the entity object itself in my system). Then you run a function which simulates everything in the system. I find it hard to find a better solution than doing it in an external component manager like this.

    Entity creation is done like this: EntityManager implements the method RegisterEntity(entityClass, factory), which registers how to create an entity of that class. It also implements the method CreateEntity(entityClass), which returns an object of type BaseEntity. And now comes my problem: how would the reference to a component be registered with the component managers? I have no idea how I would reference the component managers from a factory/closure.
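
    For illustration, here is a minimal C# sketch (names taken from or modeled on the post; the delegate shape is an assumption) of one common answer to the closing question: pass the EntityManager itself into the factory delegate, so the closure can reach the component managers at creation time.

        using System;
        using System.Collections.Generic;

        public abstract class BaseEntity { }

        public class EntityPhysicsManager
        {
            // In the real engine this would own the Box2D world and keep
            // references to each entity's EntityPhysicalBody component.
            public void Register(BaseEntity owner) { /* track the component */ }
        }

        public class EntityManager
        {
            public EntityPhysicsManager Physics = new EntityPhysicsManager();

            private readonly Dictionary<string, Func<EntityManager, BaseEntity>> factories =
                new Dictionary<string, Func<EntityManager, BaseEntity>>();

            // Register how to create an entity of a given class.
            public void RegisterEntity(string entityClass, Func<EntityManager, BaseEntity> factory)
            {
                factories[entityClass] = factory;
            }

            // The factory receives the manager, so the closure can register
            // components with the component managers while building the entity.
            public BaseEntity CreateEntity(string entityClass)
            {
                return factories[entityClass](this);
            }
        }

        public class Crate : BaseEntity { }

        public static class Demo
        {
            public static void Main()
            {
                var manager = new EntityManager();
                manager.RegisterEntity("crate", em =>
                {
                    var crate = new Crate();
                    em.Physics.Register(crate);   // component reference lands in the manager here
                    return crate;
                });
                BaseEntity e = manager.CreateEntity("crate");
            }
        }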

    Read the article

  • nginx terminates connection after 65k bytes

    - by David Wolever
    I've got nginx configured as a front-end to a Python application running under gunicorn, but nginx is terminating connections after about 65k of data have been sent. For example, I've got a view which looks like this:

        def debug_big_file(request):
            return HttpResponse("x" * 500000)

    But when I access that URL through nginx, I only get 65283 bytes:

        $ curl https://example.com/debug/big-file | wc
        …
        curl: (18) transfer closed with outstanding read data remaining
        0 1 65283

    Note that everything works as expected when accessing gunicorn directly:

        $ curl http://localhost:1234/debug/big-file | wc
        …
        0 1 500000

    The relevant nginx config:

        location / {
            proxy_pass http://localhost:1234/;
            proxy_redirect off;
            proxy_headers_hash_bucket_size 96;
        }

    And nginx version 1.7.0. Some other facts:

    - The number of bytes is consistent from request to request, but it varies based on the content (I first noticed it with a large PNG file, which was cut off after 65,372 bytes, not 65,283)
    - 110k bytes are sent correctly (ie, "x" * 110000 returns all 110,000 bytes), but 120k bytes are not
    - tcpdump suggests that nginx is sending a RST packet to gunicorn:

    Read the article

  • Bind DHCP Server to Network Bridge

    - by Luke
    My wireless router died, so I decided to route everything through my server. I installed a second NIC and a wireless card to build my new network: one NIC to the modem, one NIC to the switch, and the wireless to... well, wireless. Anyways, I got far enough to get DHCP to work on just ONE adapter when I used Internet Connection Sharing (I couldn't get RRAS set up for the life of me); then I decided to try bridging the wireless and second NIC. Now the DHCP server won't bind to the bridge, but I can enter manual IPs on my clients and they'll connect to the Internet. I also tried changing my wireless adapter's IP to 192.168.0.2, and to 192.168.1.1, to try to set up a separate scope, but to no avail. I'm running Windows Server 2003.

    Read the article

  • Will being a VPN Client interrupt web pages hosted by IIS?

    - by f1gm3nt3d
    We have a dedicated server that is primarily used to host our website. I've been tasked with determining the feasibility of setting up a VPN connection from it to our internal network at our offices, for ease-of-use purposes. My concern is that if I establish this VPN connection, our website will only be available internally and not to the internet in general. I'm concerned because everything I read states that by default all network traffic is routed over the VPN connection once it's established. Is this also true for applications such as IIS that are listening for incoming connections?

    TL;DR: Will having a VPN client up and running cause a problem with server applications that may be listening on the NIC connected to the Internet, due to the changes that VPN makes in the routing tables?

    Read the article

  • Consuming Hello World pagelet in WebCenter Spaces

    - by astemkov
    Introduction

    The goal of this exercise is to show you how you can use the Hello World pagelet that you just created from your web space.

    Assumptions

    Let's assume the following:

    - Pagelet Producer is running on http://pageletserver.company.com:8889/pagelets/
    - WebCenter is running on http://webcenter.company.com:8888/webcenter/
    - You created the Hello_World pagelet as described here.

    For our exercise we will need a space, so let's log into WebCenter Portal and create a space called "myspace" using the "Portal Site" template.

    Registering Pagelet Producer with WebCenter Portal

    In order to use our newly created pagelet from WebCenter Spaces, we first need to register the Pagelet Producer:

    - Click the "Administration" link on the WebCenter toolbar
    - Open the "Configuration" tab
    - Click the "Services" link in the upper-left corner of the page
    - Click the "Portlet Producers" link on the right-hand pane of the screen
    - Click the "Register" button
    - Select the "Pagelet Producer" radio button and type Producer Name = "MyPageletProducer", Server URL = http://pageletserver.company.com:8889/pagelets/
    - Click the "Test" button

    If everything is successful, you will see a confirmation screen. Now click "OK". The pagelet producer is registered.

    Inserting the Hello World pagelet into a WebCenter Space

    Now let's insert the Hello World pagelet into the "myspace" page:

    - Go back to "myspace", click on the icon in the upper-right corner of the page, and select "Edit Page"
    - Click on one of the "Add Content" buttons
    - Select "Mash-Ups", then "Pagelet Producers"
    - You will see the MyPageletProducer that we just registered; click on it
    - You will see the library "MyLib" that contains our "Hello_World" pagelet; click on "MyLib", then on "Hello_World"
    - Click the "Add" button, and then the "Close" button
    - Click the "Save" button, and then "Close"

    Now we see that our "Hello World" pagelet is inserted into the "myspace" page.

    Read the article

  • SDLC/Deployment/Documentation ERP/framework that minimizes developer misery

    - by foampile
    I was wondering if there are favorite SDLC/Deployment/Documentation/Versioning ERPs/frameworks that work with popular SDLC methodologies, such as Agile, and that minimize developer exposure to what most programmers hate most -- PAPERWORK. Often, release management is extremely inefficient and there is a lot of data duplication across the documents that are required to accompany changes. E.g., when submitting a deployment request, I must list all files and their revisions from source control -- but why is that necessary, if every file revision I check in is pinned to a work order and a deployment request is just a list of work orders? Such info should be pulled from the system automatically, without me needing to extract it and report it. And then there is the backout plan -- well, just do everything in reverse from what you did to deploy; why do you need specific instructions? The same applies to documentation.

    So I am curious whether there is an overall, all-encompassing ERP that includes source control and minimizes paperwork by sharing centralized data across the different documents associated with the SDLC (such as documentation being pulled from javadoc without needing to write it separately), yet does not compromise structure and control over the code base and release management.

    Read the article

  • Setting up a wireless access point with Ubuntu server

    - by Solignis
    I am trying to set up a wifi access point with my Ubuntu server. Everything "seems" to be good since I got a better wifi radio (long story). But now I am a little confused: when my Android phone tries to associate with the AP, it succeeds with no problem, but then it cannot get an IP address from my DHCP server. I have tried messing with firewall rules -- nothing. I tried making a bridge interface, as I am told I have to do (not sure if I did it right though). What I am trying to do is make the wireless interface an extension of my eth0 network. Am I on the right track or am I going about it the wrong way?

    Read the article

  • SQL SERVER – 2000 – DBCC SQLPERF(waitstats) – Wait Type – Day 24 of 28

    - by pinaldave
    I have received many comments, emails, suggestions and motivations for my current series on wait types and wait statistics. One of the questions I keep receiving almost every other day is whether all of the discussions I have presented so far are also applicable to SQL Server 2000. Additionally, I receive another question asking whether wait statistics matter in SQL Server 2000 and, if they do, how to measure wait types there. In SQL Server, you can run the following command to get a list of all the wait types:

        DBCC SQLPERF(waitstats)

    The query above also works in SQL Server 2005/2008/R2 because of backward compatibility. As you might have noticed, I have been discussing everything keeping SQL Server 2005+ in mind, and have given little consideration to SQL Server 2000. However, I am pretty sure that most of the suggestions I have provided are applicable to SQL Server 2000. The wait types I have been discussing mostly exist in SQL Server 2000 as well; the difference is that later releases keep adding wait types the 2000 version does not have, but measuring them there is still worth it. Wait types are essential for measuring performance bottlenecks; needless to say, I am a big fan of using them to identify performance bottlenecks. Please read all the posts in the Wait Types and Queue series.

    Note: The information presented here is from my experience, and there is no way that I claim it to be accurate. I suggest reading Books Online for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
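
    As an aside, DBCC SQLPERF(waitstats) returns an ordinary result set, so snapshots can also be captured programmatically. Here is a small C# sketch (not from the original post; the connection string is a placeholder, and the column order -- Wait Type, Requests, Wait Time, Signal Wait Time on SQL Server 2000 -- is an assumption to verify against your server):

        using System;
        using System.Data.SqlClient;

        class WaitStatsSnapshot
        {
            static void Main()
            {
                // Placeholder connection string -- point it at your own instance.
                using (var conn = new SqlConnection("Server=.;Database=master;Integrated Security=SSPI"))
                using (var cmd = new SqlCommand("DBCC SQLPERF(waitstats)", conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // Assumed column order: Wait Type, Requests, Wait Time, Signal Wait Time
                            Console.WriteLine("{0,-40} wait time: {1}", reader[0], reader[2]);
                        }
                    }
                }
            }
        }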

    Read the article

  • Running a small IPTV station

    - by nixterrimus
    I'm looking to run an IPTV station for my dorm. I know I can serve multicast, so that's not a problem. The station will serve out podcasts and other CC-licensed content. The target endpoint is XBMC, a media center. So far I know that I need to serve an RTP stream over UDP, streaming MPEG-4 AVC main or high profile with AAC (or AC3?) audio. I've had some luck using VLC with VLM to stream, but it seems limited. What are my other options? Everything has to run on Linux -- hopefully open source. How can I use playlists and not live streams? What are my software options?

    Read the article

  • Blank white desktop icons on new Windows 7 install

    - by SiegeX
    I just did a fresh install of Windows 7 Professional on my father's Toshiba Satellite laptop, and everything went smoothly except for the fact that certain desktop shortcuts appear as a blank white page. I've tried:

    - going in and changing the icon to many different things
    - reloading the icon cache by deleting the IconCache.db file
    - uninstalling his free AVG, as he heard the virus scanner might be preventing it (it wasn't, and he put it back)

    The one thing these icons have in common is that they are shortcuts to some very old DOS executables. One of them is WordStar 2000, to give you an idea. Does anybody have any other suggestions besides what we've tried?

    Read the article

  • Copy and Paste without needing to use the Mouse

    - by Paul Farry
    In previous versions of Windows, when you were copying files (Ctrl-C), you could Alt-Tab to the appropriate window and paste (Ctrl-V) -- everything could be driven by the keyboard. With Vista and 7, it seems that if you are copying and pasting and the files already exist in the destination location, you have to click "Copy and Replace"; there doesn't seem to be a way to do it from the keyboard. "Do this for the next {n} conflicts" can be driven by Alt-D, but "Copy and Replace" and "Don't Copy" cannot. Or is there a way, and have I just missed something? I'm more than happy to change the registry or something to enable keyboard shortcuts for this.

    Read the article

  • Booting Ubuntu Failure : error: attempt to read or write outside of disk 'hd0'

    - by never4getthis
    I have installed Ubuntu 12.10 on a WD external hard drive (320GB). This is a complete installation, not a live USB. When I plug it into my HP desktop, I go to the BIOS settings and boot off the hard drive, and everything works perfectly, as it should. This works on every single computer and laptop in my house (all HP) -- except for ONE: my HP ProBook 4530s. When I select to boot off the USB I get the message:

        error: attempt to read or write outside of disk 'hd0'

    I have removed the hdd from my laptop, so the external drive is the ONLY drive plugged in. After the message I navigate to ls /. From there I try to access other folders under ls /, for example ls /boot to get to the grub folder, and I get the same message as before. The only folders I can access without getting the message again are /home, /run and /usr. So how do I:

    - Boot Ubuntu from GRUB2 (this screen) manually
    - Set it to automatically boot Ubuntu
    - If possible, get an explanation for this problem

    Thanks!

    Read the article

  • SQL SERVER – Microsoft SQL Server 2014 CTP1 Product Guide

    - by Pinal Dave
    Today in a User Group meeting there were lots of questions related to SQL Server 2014. There are plenty of people still using SQL Server 2005, but everybody is curious about what is coming in SQL Server 2014. Microsoft has officially released the SQL Server 2014 CTP1 Product Guide. You can easily download the product guide and explore various learning resources around SQL Server 2014, as well as explore the new concepts introduced in this latest version. The SQL Server 2014 CTP1 Product Guide contains a few interesting white papers, a datasheet and presentation decks. Here is the list of the white papers:

    - Mission-Critical Performance and Scale with SQL Server and Windows Server
    - Faster Insights from Any Data
    - Platform for Hybrid Cloud
    - SQL Server In-Memory OLTP Internals Overview for CTP1
    - SQL Server 2014 CTP1 Frequently Asked Questions for TechEd 2013 North America

    Here is the list of slide decks:

    - SQL Server 2014 Level 100 Deck
    - SQL Server 2014 Mission Critical Performance Level 300 Deck
    - SQL Server 2014 Faster Insights from Any Data Level 300 Deck
    - SQL Server 2014 Platform for Hybrid Cloud Level 100 Deck

    I downloaded the Product Guide earlier, and I have not yet finished reading everything SQL Server 2014 has to offer. If you want to read about the features I am going to use in SQL Server 2014, you can read over here.

    Download Microsoft SQL Server 2014 CTP1 Product Guide

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Documenting C# Library using GhostDoc and SandCastle

    - by sreejukg
    Documentation is an essential part of any IT project, especially when you are creating reusable components that will be used by other developers (such as class libraries). Without documentation, re-using a class library is almost impossible. Just think of coding .NET applications without the MSDN documentation (oops, I can't think of it). Normally developers, who know the bits and pieces of their classes, see it as boring work to write the details out again to generate the documentation. Also, the amount of work needed to create and maintain it as changes are made can make manually creating documentation impossible or tedious. So what is the effective solution? Let me divide this into two steps:

    1. Generate comments for your code while you are writing the code.
    2. Create the documentation file using these comments.

    Now I am going to examine these processes.

    Step 1: Generate XML comments automatically

    Most developers write comments for their code. The best thing is that the comments are entered during the development process. Additionally, comments give a good reference to the code and make your code more manageable/readable. Later these comments can be converted into documentation, along with your source code, by identifying properties and methods.

    I found an add-in for Visual Studio, GhostDoc, that automatically generates XML documentation comments for C#. The add-in is available in the Visual Studio Gallery at MSDN. You can download it from the url http://visualstudiogallery.msdn.microsoft.com/en-us/46A20578-F0D5-4B1E-B55D-F001A6345748. I downloaded the free version from the above url; the free version suits my requirements. There is a professional version (you need to pay some $ for this) that gives you some more features. The installation process is straightforward; a couple of clicks will do the work for you. The best thing about GhostDoc is that it supports multiple versions of Visual Studio, such as 2005, 2008 and 2010.

    After installing GhostDoc, when you start Visual Studio, the GhostDoc configuration dialog will appear. The first screen asks you to assign a hot key; pressing this hot key will enter the comment into your code file with the necessary structure required by GhostDoc. Click Assign to go to the next step, where you configure the rules for generating the documentation from the code file. Click Create to start creating the rules, and click the Finish button to close the wizard. Now you have performed the necessary configuration required by GhostDoc. In the Visual Studio Tools menu you can find GhostDoc, which gives you some options.

    Now let us examine how GhostDoc generates comments for a method. I have written the below code in my code-behind file:

        public Char GetChar(string str, int pos)
        {
            return str[pos];
        }

    Now I need to generate the comments for this function. Select the function and enter the hot key assigned during the configuration. GhostDoc will generate the comments as follows:

        /// <summary>
        /// Gets the char.
        /// </summary>
        /// <param name="str">The STR.</param>
        /// <param name="pos">The pos.</param>
        /// <returns></returns>
        public Char GetChar(string str, int pos)
        {
            return str[pos];
        }

    So this is a very handy tool that helps developers write comments easily. You can generate the XML documentation file separately while compiling the project; this is done by the C# compiler. You can enable the XML documentation creation option (checkbox) under Project properties -> Build tab. Now when you compile, the XML file will be created under the bin folder.

    Step 2: Generate the documentation from the XML file

    Now you have generated the XML documentation file. Sandcastle is the tool from Microsoft that generates MSDN-style documentation from the compiler-produced XML file. The project is available on CodePlex: http://sandcastle.codeplex.com/. Download and install Sandcastle on your computer. Sandcastle is a command-line tool that doesn't have a rich GUI. If you want to automate the documentation generation, you will definitely be using the command-line tools. Since I wanted to generate the documentation from the XML file generated in the previous step, I was expecting a GUI where I could see the options. There is a GUI available for Sandcastle called Sandcastle Help File Builder; see the link to the project on CodePlex: http://www.codeplex.com/wikipage?ProjectName=SHFB. You need to install Sandcastle and then the Sandcastle Help File Builder. From here I assume that you have installed both successfully.

    Once you have installed the Help File Builder, it will be available in your All Programs list. Clicking on the Sandcastle Help File Builder GUI will launch the application. First you need to create a project: click File -> New Project. The New Project dialog will appear. Choose a folder to store your project file, give a name for your documentation project, and click the Save button. Now you will see your project properties.

    Next, from the Project Explorer, right-click on Documentation Sources and click the Add Documentation Source link. A documentation source is a file such as an assembly or a Visual Studio solution or project from which information will be extracted to produce API documentation. In the Add Documentation Source dialog, I selected the XML file generated by my project. Once you add the XML file to the project, you will see the dll file automatically added by the Help File Builder.

    Now click the Build button and the application will generate the help file. The Build window shows the result of each step. Once the process has completed successfully, navigate to your help project folder (I selected the folder My Documents\Documentation); inside the Help folder you can find the chm file. Opening the chm file gives you MSDN-like documentation.

    Documentation is an important part of the development life cycle. Sandcastle with GhostDoc makes this process easier, so that developers can implement documentation in their projects with simple-to-use steps.
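
    One follow-up worth adding: GhostDoc's generated text ("Gets the char.", "The STR.") is only a starting point; editing the comments is what gives Sandcastle something meaningful to render. A hand-tuned version of the same method might look like this (wording mine, purely illustrative):

        /// <summary>
        /// Returns the character at the given position in a string.
        /// </summary>
        /// <param name="str">The string to index into.</param>
        /// <param name="pos">The zero-based position of the character.</param>
        /// <returns>The character found at <paramref name="pos"/>.</returns>
        /// <exception cref="System.IndexOutOfRangeException">
        /// Thrown when <paramref name="pos"/> is outside the bounds of <paramref name="str"/>.
        /// </exception>
        public Char GetChar(string str, int pos)
        {
            return str[pos];
        }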

    Read the article

  • Double default gateway ubuntu server

    - by Elena
    I've just installed Ubuntu Server 9.10 on an EeeBox. This is my /etc/network/interfaces:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        auto wlan0
        iface wlan0 inet static
            address 192.168.48.16
            netmask 255.255.248.0
            wireless-essid mynet

        auto eth0
        iface eth0 inet static
            address xx.xx.xx.xx
            netmask 255.255.255.224
            gateway xx.xx.yy.yy

    When I restart /etc/init.d/networking, I can access the eth0 IP address from the internet and I can ping the machines in my wifi network mynet. Everything works fine and I have one default gateway. But after some time, if I check the routes again, I find two default gateways: one is correct (the previous one), but the other is the gateway of the wifi network. I have a quite low signal on mynet where my server is, and sometimes the wifi just disconnects and then reconnects. So I think this may be the problem: the DHCP of the wifi net, when reconnecting, also adds a default gateway. Any idea on how to resolve this issue?

    Read the article

  • Allow traffic from ssl-vpn to enter ipsec tunnel on fortigate

    - by Sascha
    We configured our FortiGate 50B to route traffic from our local net 192.168.10.* (which is our office) to a remote network 172.29.112.* using an IPsec tunnel. Everything works fine as long as my computer has an IP from 192.168.10.*. We can also connect to the office network from home using an SSL VPN connection; once connected, we receive an IP from 10.41.41.*. Now I want to allow traffic to flow from 10.41.41.* to 172.29.112.*, just like it does from the office network. Could somebody point me in the right direction as to what I would need to do? Thanks, Sascha

    Read the article

  • Cisco vlan entry missing in vlan.dat, but appears in running-config

    - by nLinked
    One of our vlans (ID: 104) stopped working suddenly, and computers on that vlan fail when trying to obtain a DHCP IP address. On the Cisco switch, if we do show vlan, this command is supposed to show the vlans in the vlan.dat file. We notice it has every other vlan except the one that is now missing. If we show the running-config or even the startup config, they DO show the missing vlan (as if everything is fine), but that vlan doesn't take effect. We tried deleting the "missing" vlan using clear vlan 104, but it says No subinterface configured for vLAN Identifier 104, so it's already missing. Recreating the vlan, saving and rebooting still doesn't add it to the vlan.dat or make the vlan work. The switch is in vtp server mode. Our startup config is here: http://pastebin.com/RHxxTG5p Any ideas appreciated.

    Read the article

  • sound stops working after a while in ubuntu 12.10

    - by clio
    I did a clean install of Ubuntu 12.10. Everything seemed fine at first, including sound, but after a while it stopped working. To get it back I have to restart, sometimes even more than once. Any idea on how to fix this?

        sudo aplay -L
        default
            Playback/recording through the PulseAudio sound server
        sysdefault:CARD=PCH
            HDA Intel PCH, ALC665 Analog
            Default Audio Device
        front:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            Front speakers
        surround40:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            4.0 Surround output to Front and Rear speakers
        surround41:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            4.1 Surround output to Front, Rear and Subwoofer speakers
        surround50:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            5.0 Surround output to Front, Center and Rear speakers
        surround51:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            5.1 Surround output to Front, Center, Rear and Subwoofer speakers
        surround71:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
        iec958:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Digital
            IEC958 (S/PDIF) Digital Audio Output
        hdmi:CARD=PCH,DEV=0
            HDA Intel PCH, HDMI 0
            HDMI Audio Output
        dmix:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            Direct sample mixing device
        dmix:CARD=PCH,DEV=1
            HDA Intel PCH, ALC665 Digital
            Direct sample mixing device
        dmix:CARD=PCH,DEV=3
            HDA Intel PCH, HDMI 0
            Direct sample mixing device
        dsnoop:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            Direct sample snooping device
        dsnoop:CARD=PCH,DEV=1
            HDA Intel PCH, ALC665 Digital
            Direct sample snooping device
        dsnoop:CARD=PCH,DEV=3
            HDA Intel PCH, HDMI 0
            Direct sample snooping device
        hw:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            Direct hardware device without any conversions
        hw:CARD=PCH,DEV=1
            HDA Intel PCH, ALC665 Digital
            Direct hardware device without any conversions
        hw:CARD=PCH,DEV=3
            HDA Intel PCH, HDMI 0
            Direct hardware device without any conversions
        plughw:CARD=PCH,DEV=0
            HDA Intel PCH, ALC665 Analog
            Hardware device with all software conversions
        plughw:CARD=PCH,DEV=1
            HDA Intel PCH, ALC665 Digital
            Hardware device with all software conversions
        plughw:CARD=PCH,DEV=3
            HDA Intel PCH, HDMI 0
            Hardware device with all software conversions

    Read the article

  • Postgres 9.0 locking up, 100% CPU

    - by Jake
    We are having a problem where our Postgres 9.0 server occasionally locks up and kills our webapp. Restarting Postgres fixes the problem. Here's what I've been able to observe:

    - First, usage of one CPU jumps to 100% for a few minutes
    - Disk operations drop to ~0 during this time
    - Database operations drop to 0 (blocks and tuples per sec)
    - Logs show, during this time (twice): WARNING: worker took too long to start; cancelled
    - No queries in the logs (only those over 200ms are logged)
    - No unusually long-running queries logged before or during
    - Then the second CPU jumps to 100%
    - The number of postgres processes jumps from the usual 8-10 to ~20
    - Matched by a spike in Postgres blocks per second (about twice normal)
    - Logs show: LOG: could not accept SSL connection: EOF detected
    - Queries are running, but slow
    - Restarting postgres returns everything to normal

    Setup:

    - Server: Amazon EC2 Large
    - Ubuntu 10.04.2 LTS
    - Postgres 9.0.3
    - Dedicated DB server

    Does anyone have any idea what's causing this? Or any suggestions about what else I should be checking out?

    Read the article

  • MacBook Pro 15in High-res hard to read. What setting should I change?

    - by orokusaki
    I just bought a new MacBook Pro with the high-res screen (1680x1050), but I noticed that all text is so small that to read it my face has to be like 18 inches away. When I adjusted the resolution to the next sizes down (1440 x 852, and 1440 x 852 stretched), as well as all the other smaller sizes, it made everything look blurry (similar to when you use Command + Scroll to zoom in, where the text is really soft on the edges and difficult to read). Is there a setting somewhere that I'm missing, or another resolution-settings area that I can use? I feel like this 2800-dollar notebook may only be good for movie watching otherwise. Thanks in advance.

    Read the article

  • Software Engineering Practices &ndash; Different Projects should have different maturity levels

    - by Dylan Smith
    I've had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, whether what we are doing is appropriate, and what factors you should consider when determining which practices are most appropriate in a given context. I wanted to write up my thoughts in a little more detail on this subject, so here we go.

    If you compare any two software projects (specifically comparing their codebases) you'll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I'm specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, and proper adherence to the SOLID principles are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access database, an Excel spreadsheet, or maybe some quick "drag-and-drop coding". For this blog post I'm going to refer to this as the Software Engineering Maturity Spectrum (SEMS).

    I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over, so I won't bother going into them again here, but there are also (unnecessary) costs to over-engineering a solution. Sometimes putting in multiple layers, and IoC containers, and abstracting out the persistence, etc. is complete overkill if a one-time-use Access database could solve the problem perfectly well.

    A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small trivial application today, but these things always grow and stick around for many years, and then you're stuck maintaining a big ball of mud. I think this is a cop-out. Sure you can't always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn't mean you can't manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point... maybe even multiple times).

    My thoughts are that we should be making a conscious decision around the start of each project approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors:

    1. Importance - How important to the business is this application? What is the impact if the application were to suddenly stop working?
    2. Complexity - How complex is the application functionality?
    3. Life-Expectancy - How long is this application expected to be in use? Is this a one-time-use application, does it fill a short-term need, or is it more strategic and expected to be in use for many years to come?

    Of course this isn't an exact science. You can't say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at and then translate that into some prescriptive set of practices and techniques you should be using.
    Rather my point is that we need to be aware that there is a spectrum, and that not everything is going to be (or should be) at the edges of that spectrum; indeed a large number of projects should probably fall somewhere within the middle, and different projects should adopt a different level of software engineering practices and maturity based on the needs of that project.

    To give an example of this way of thinking from my day job: every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc. The last time we arranged this event, all these various pieces of data were tracked in separate spreadsheets, and reconciliation and cross-referencing of all the data was literally done by hand, using printed copies of the spreadsheets and several people sitting around a table going down each list row by row.

    Obviously there is some room for improvement in how we are using software to manage the event's logistics. The next time this event occurs we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so, for example, if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS, meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the 3 criteria I listed above for determining its place within the SEMS, we can see why:

    - Importance: If this application were to stop working, the business doesn't grind to a halt and revenue doesn't stop; in fact our customers wouldn't even notice, since it isn't a customer-facing application. The impact would simply be more work for our event planning staff as they revert back to the previous way of doing things (assuming we don't have any data loss).
    - Complexity: The use cases for this project are pretty straightforward. It simply needs to manage several lists of data and link them together appropriately -- precisely the task that Access (and/or Excel) can do with minimal custom development required.
    - Life-Expectancy: For this specific project we're only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well this may change (see below).

    Let's assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS, and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for this application. This may mean scrapping the Access database and re-writing it as an actual web or Windows application. In this case the life-expectancy changed, but let's assume the importance and complexity didn't change all that much.
    We can still probably get away with not adopting a lot of the so-called "best practices". For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an Access database and continue using it; this is a decision that needs to be made on a case-by-case basis).

    At Anvil Digital we have aspirations to become a primarily product-based company. So let's say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple of years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also, as we hold focus groups and gather feedback from customers and potential customers, there's a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company.

    In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release, we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, a proper ORM and IoC container, etc. More likely, in this example the jump along the SEMS would be so large we'd probably end up scrapping the current code and re-writing. Although, if it was a slow phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch.

    The key point I'm trying to get across is not that you should be throwing out your code and starting from scratch all the time. But rather that you should be aware of when and how the context and objectives around a project change, and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs.

    Note: There is also the idea of "spectrum decay". Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won't be the same 3 years from now. If you have a project that you were to assess at somewhere around the 80% mark on the SEMS today, but don't touch the code for 3 years and come back and re-assess its position, it will almost certainly have changed, since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay).

    Developer Skills

    Another important aspect of this whole discussion is the skill sets of your architects and lead developers. When talking about the progression of a developer's skills from junior->intermediate->senior->… they generally start by only being able to write code that belongs on the left side of the SEMS, and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS.
    We all realize that the learning never stops, but eventually you'll get to the point where you can comfortably develop at the right end of the SEMS (the exact practices and techniques that translates to are constantly changing, but that's not the point here). A critical skill that I'd love to see more evidence of in our industry is the most senior guys not only being able to work at the right end of the SEMS, but more importantly being able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill is being able to make the conscious decision to move a project's code further right on the SEMS (based on changing needs) and do so in an incremental manner without having to start from scratch.

    An exercise that I'm planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe the project *should* be on the SEMS based on the business needs, and, for those that don't match up (i.e. most of them), come up with a plan to improve the situation.

    Read the article

  • Multiple Windows Messenger Users Windows 8

    - by Kurt Woods
    I am trying out Windows 8 to see if I want to use it on my main computer. In Windows 8 I have my domain user account linked to my personal outook.com Live account. This is great for everything except Windows Live Messenger. I need to have my domain messenger account accessible. Is there a way to sign into multiple Messenger accounts on Windows 8, or at least sign into a different account without affecting every other app? Thanks! Kurt

    Read the article
