Search Results

Search found 24353 results on 975 pages for 'test coverage'.

Page 288/975 | < Previous Page | 284 285 286 287 288 289 290 291 292 293 294 295  | Next Page >

  • Speakers will not work after I use USB Headset

    - by Josh K
    I am trying to configure my SteelSeries Siberia V2 Frost USB headset to work alongside my 2.0 speakers, which connect via a jack. My goal is to find an easy way (no restart) to switch playback between my headset and my speakers, and vice versa. If I plug my headset in, make it the default device, and then restart my application/web page, sound plays through the headset. If I switch the default to my speakers and restart apps/web pages, sound does not play. I know my speakers work, because if I configure them through Windows and run the test, the sounds play; sounds also play when I test through my audio manager. Even if I unplug my headset, I still cannot get sound out of my speakers unless I restart. My audio manager is Realtek HD Audio Manager, on Windows 7 x64. I have tried the speakers in the back jack with the USB in front, and the speakers in the front jack with the USB in the front port. I have not tried the speakers in back with the USB in back.
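
    A common no-restart approach, offered here as a sketch rather than a confirmed fix: NirSoft's NirCmd utility can flip the default playback device from a shortcut or batch file. The device names below are assumptions and must match whatever appears in the Windows Sound control panel.

        rem switch-to-speakers.bat (hypothetical; requires nircmd.exe on the PATH)
        nircmd.exe setdefaultsounddevice "Speakers"

        rem switch-to-headset.bat
        nircmd.exe setdefaultsounddevice "SteelSeries Siberia V2"

    Applications that latch onto an audio device at startup may still need to be restarted, which could explain the behavior described above.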

    Read the article

  • Loading guest OS's (Windows) localhost through my host's (Mountain Lion) browsers

    - by Jonah Goldstein
    For work, I have to develop in Visual Studio, which I run via VMware Fusion 5. I really want to test via my Mac's native browsers, for a multitude of reasons: that is, view the IIS web content that my Windows VM exposes in my Mac's own native Firefox, Chrome, etc. If I could expose a pretty URL, that would be even better, but I would certainly settle for an ugly IP :) I got a decent number of views but no responses when I asked on VMware's own boards. Everyone seems to want to go the other direction (developing in Sublime Text/TextMate, serving through MAMP, and exposing it to Windows browsers for testing), and there seem to be tried-and-true solutions for that. Unfortunately (or fortunately, depending on your preference) my startup is pretty entrenched in the Visual Studio development tools. I'm really hoping that someone knows the answer to this. Thanks :)
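
    A sketch of the usual recipe (the IP and hostname below are assumptions): find the VM's address by running ipconfig inside Windows, confirm IIS answers at that address from within the VM, then map a friendly name to it in the Mac's hosts file.

        # /etc/hosts on the Mac (hypothetical guest IP)
        192.168.119.130   dev.windows.local

        # flush the resolver cache on Mountain Lion
        sudo killall -HUP mDNSResponder

    With Fusion's default NAT or bridged networking, http://dev.windows.local/ in Safari or Chrome on the Mac should then reach the guest's IIS, provided the Windows firewall allows inbound port 80.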

    Read the article

  • Weblogic Threads Usage

    - by Hila
    I have an application deployed on WebLogic 10.3 which exhibits strange behavior. I am running a constant (not too high) load against the application: 20 concurrent users performing light activity. The response time is reasonable (well below 100 ms after the application stabilizes). Memory consumption seems fine (the application creates a lot of short-lived objects, but they are garbage collected, so overall memory consumption stays under 500 MB). Thread stats seem healthy as well. And yet, after I leave the test running for a while, more and more execute threads ("[ACTIVE] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'") are created, until eventually the application crashes. This test hasn't been running for long (all the new threads missing from the first screenshot were created while I was writing this question), and I've seen far more threads created. Any idea why these threads are being created?
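
    When self-tuning execute threads pile up like this, the usual suspicion is that requests are blocking somewhere, so WebLogic keeps growing the pool. A quick way to check (a sketch; it assumes a HotSpot JVM, while JRockit installs would use jrcmd instead):

        # find the WebLogic JVM's process id, then dump all thread stacks
        jps -l
        jstack <pid> > threads.txt

    If most ExecuteThreads in the dump are parked on the same lock or stuck in the same external call (JDBC, HTTP, etc.), that bottleneck is what the self-tuning pool is reacting to.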

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-28

    - by Bob Rhubart
    Follow the action: OTN's YouTube Channel. Check out what's happening at Oracle OpenWorld and JavaOne with video coverage by the OTN crew. New interviews and more posted daily on the OTN YouTube channel.
    Whiteboards, not red carpets: OTN Architect Day Los Angeles, Oct 25. Free event. Yes, it's Tinsel Town, but the stars at this event are experts in the use of Oracle technologies in today's architectures. This free event includes a full slate of technical sessions and peer interaction covering cloud computing, SOA, and engineered systems, and lunch is on us. Register now. Thursday, October 25, 2012, 8:00 a.m. - 5:00 p.m. Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048.
    Overview about the 5th SOA, Cloud and Service Technology Symposium | Jan van Zoggel. Middleware consultant and author Jan van Zoggel shares an overview of three of the sessions he attended at this week's SOA, Cloud and Service Technology Symposium in the UK.
    OOW 2012: Questions to get answered during this conference | Lucas Jellema. Oracle ACE Director Lucas Jellema shares "a quick list of some of the questions that are on the top of my head to get answered during this year's conference." The list may be quick, but it is quite detailed, and well worth a look.
    Front-ending a SAML Service Provider with OHS | Andre Correa. Oracle Fusion Middleware A-Team member Andre Correa shares a follow-up to a previous post covering integrating OBIEE 11g into WebLogic's SAML SSO.
    Thought for the Day: "Simplicity is prerequisite for reliability." — Edsger W. Dijkstra (May 11, 1930 – August 6, 2002). Source: SoftwareQuotes.com

    Read the article

  • Master Data Management – A Foundation for Big Data Analysis

    - by Manouj Tahiliani
    While Master Data Management has crossed the proverbial chasm and is on its way to becoming mainstream, businesses are being hammered by a new megatrend called Big Data. Big Data is characterized by massive volumes, high frequency, a variety of less-structured data sources such as email, sensors, smart meters, social networks, and weblogs, and the need to analyze vast amounts of data to derive value and improve management decisions. Businesses that have embraced MDM to get a single, enriched, and unified view of master data (resolving semantic discrepancies and augmenting the explicit master data from within the enterprise with implicit data from outside it, such as social profiles) will have a leg up in embracing Big Data solutions. This is especially true for large and medium-sized businesses in industries like retail, communications, and financial services, which would find it very challenging to get comprehensive analytical coverage and long-term success without resolving the limitations of a heterogeneous topology that leads to disparate, fragmented, and incomplete master data. For analytical success with Big Data, in other words ROI from Big Data investments, businesses need to acquire, organize, and analyze the deluge of data to make better decisions. Structured and unstructured data will need to coexist, with a tight link maintained between the two to extract maximum insight. MDM is the catalyst that maintains that tight linkage by providing an understanding of the identity and characteristics of the persons, companies, products, suppliers, etc. associated with the Big Data, thereby helping accelerate ROI. In my next post I will discuss patterns for coexisting Big Data solutions and MDM. Feel free to share comments and thoughts on the above, as well as on integration or architectural patterns.

    Read the article

  • Cross-Forest Trust

    - by cdalley
    I am looking at testing a cross-forest trust: we have two domain controllers (with different forests and domain names) set up so we can move everyone onto the new domain. We do NOT run Exchange on site, and we currently have no links from O365 to AD. On to the problem: I have set up two DCs in virtual machines, both on the same 192.168.0.* network.
    The Windows 2003 server:
        Name: OLDSRVR ("clone" of our current domain controller)
        IP: 192.168.0.1
        Domain: internal.test.com
    The Windows 2012 server:
        Name: ADCTEST01 (brand-new domain set up from scratch, separate from internal.test.com)
        Domain: internal.test2.com
        IP: 192.168.0.2
    OLDSRVR can only see ADCTEST01 if it has a dynamic IP set; if I set a static IP, it cannot see it. If I use the dynamic IP and try to join, it gets to the end and then complains: "The trust relationship between this workstation and the primary domain failed". Any ideas?
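
    For what it's worth, the symptoms point at name resolution rather than the trust itself: in a two-forest lab, each DC normally uses its own DNS and needs a conditional forwarder (or stub zone) for the other forest before a trust can be created. A sketch, assuming each DC runs its own DNS; this is an assumption about the likely cause, not a confirmed fix:

        rem on OLDSRVR: forward the other forest's zone to ADCTEST01
        dnscmd /zoneadd internal.test2.com /forwarder 192.168.0.2

        rem on ADCTEST01: forward the old forest's zone to OLDSRVR
        dnscmd /zoneadd internal.test.com /forwarder 192.168.0.1

        rem then create the trust (can also be done in AD Domains and Trusts)
        netdom trust internal.test.com /d:internal.test2.com /add /twoway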

    Read the article

  • What is wrong with my expect script?

    - by Bryan
    I'm trying to learn how to use the expect command to help me automate deployment of some software via shell scripts, and figured I'd start with something simple. I've created a file in my home dir called 'foo' using:

        touch foo

    And I've created the following script, saved as test.exp:

        #!/usr/bin/expect
        spawn rm -i foo
        expect "rm: remove regular empty file `foo'?"
        send "y\r"

    When I run the script using ./test.exp, it spawns the rm command, but it doesn't appear to send the y and carriage return. I know I don't have a typo in the expect string, as I used copy and paste to put it in the script. What am I doing wrong?
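
    A likely culprit (an assumption, but a common one with scripts like this): expect exits as soon as send returns, tearing down the spawned rm before it can read the reply. Waiting for the process to finish fixes that, and matching exactly also avoids the ` and ? being treated as glob characters:

        #!/usr/bin/expect
        spawn rm -i foo
        expect -exact "rm: remove regular empty file `foo'?"
        send "y\r"
        expect eof  ;# wait for rm to exit instead of quitting immediately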

    Read the article

  • Nginx ignoring client's HTTP 1.0 request and responding with HTTP 1.1

    - by Yoga
    I am testing using nginx/php5-fpm, with the code:

        <?php
        header($_SERVER["SERVER_PROTOCOL"]." 404 Not Found");
        // also tested: header("Status: 404 Not Found");
        echo $_SERVER["SERVER_PROTOCOL"];

    and forcing HTTP 1.0 with the curl command:

        curl -0 -v 'http://www.example.com/test.php'
        > GET /test.php HTTP/1.0
        < HTTP/1.1 404 Not Found
        < Server: nginx
        < Date: Sat, 27 Oct 2012 08:51:27 GMT
        < Content-Type: text/html
        < Connection: close
        <
        * Closing connection #0
        HTTP/1.0

    As you can see, I am already requesting with HTTP 1.0, but nginx replies with HTTP 1.1.

    Read the article

  • It could be worse....

    - by Darryl Gove
    As "guest" pointed out, in my file I/O test I didn't open the file with O_SYNC, so in fact the time was spent in OS code rather than in disk I/O. It's a straightforward change to add O_SYNC to the open() call, but it's also useful to reduce the iteration count - since the cost per write is much higher: ... #define SIZE 1024 void test_write() { starttime(); int file = open("./test.dat",O_WRONLY|O_CREAT|O_SYNC,S_IWGRP|S_IWOTH|S_IWUSR); ... Running this gave the following results: Time per iteration 0.000065606310 MB/s Time per iteration 2.709711563906 MB/s Time per iteration 0.178590114758 MB/s Yup, disk I/O is way slower than the original I/O calls. However, it's not a very fair comparison since disks get written in large blocks of data and we're deliberately sending a single byte. A fairer result would be to look at the I/O operations per second; which is about 65 - pretty much what I'd expect for this system. It's also interesting to examine at the profiles for the two cases. When the write() was trapping into the OS the profile indicated that all the time was being spent in system. When the data was being written to disk, the time got attributed to sleep. This gives us an indication how to interpret profiles from apps doing I/O. It's the sleep time that indicates disk activity.

    Read the article

  • nginx timeout despite ridiculous configuration

    - by Joa Ebert
    The scenario is an API server that should handle uploads. Posting to my.host.com/api/upload should do something with the body the client sends. However, the API server has been designed to block the whole request until it has fully processed the file, including some analysis which can take up to approx. 5 min (...!). This has to change, of course. In the meantime I wanted to set up nginx as a load balancer in front of the API servers. I quickly ran into a timeout issue, consulted Google, and came up with this ridiculous test configuration:

        user www-data;
        worker_processes 4;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            access_log off;
            sendfile on;
            send_timeout 3600;
            keepalive_timeout 3600 120;
            tcp_nopush on;
            tcp_nodelay on;
            gzip off;
            client_header_timeout 3600;
            client_body_timeout 3600;
            proxy_send_timeout 3600;
            proxy_read_timeout 3600;
            proxy_connect_timeout 1800;
            proxy_next_upstream error;
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    And:

        upstream test {
            server host1;
            server host2;
        }

        server {
            listen 80;
            server_name my.host.com;
            client_max_body_size 10m;

            location /api/ {
                proxy_pass http://test;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $host;
                proxy_redirect off;
            }
        }

    Still, when an upload happens, I get the following result in the error.log:

        2010/12/22 13:36:42 [error] 5256#0: *187359 upstream timed out (110: Connection timed out) while reading response header from upstream, client: xx.xx.xx.xx, server: my.host.com, request: "POST /api/upload HTTP/1.1", upstream: "http://apiserver:80/upload", host: "my.host.com"

    What else could I do? If I look at the log of the API server, I can see that it is still processing the request and analyzing the file. But 3600 seconds as a timeout should be more than enough, and this happens after just a couple of seconds. I did a reload and a force-reload of the configuration as well, of course.
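
    One way to narrow this down (a suggestion, not part of the original post) is to take nginx out of the picture and confirm how long the upstream itself holds the connection; if a direct request also dies after a few seconds, the timeout lives on the API server rather than in nginx:

        # hypothetical direct upload to one upstream, 2h client-side cap
        curl -v -m 7200 -F "file=@sample.bin" http://host1/upload

    If the direct request survives, the next suspects would be another proxy layer in between, or an nginx vhost other than the one shown above actually serving the request.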

    Read the article

  • PC doesn't power up anymore

    - by Andrew
    I will tell you the story behind the problem. The computer needed to be reinstalled. It has 2 HDDs, so I had to unplug one to see which was which, to save as much data as possible. After unplugging the first one, it booted with the reinstallation CD; it wasn't the HDD I was looking for, so I turned the PC off and unplugged the other one. After unplugging it and turning the PC on, right after the boot test the HDD turned off and only the CPU fan kept working. I turned the machine off with the power supply's switch, since holding the power button didn't do it. And now it doesn't want to turn on any more. I tried another (test) power supply, but the result is the same: it doesn't want to turn on. Any idea?

    Read the article

  • Using pscp and getting permission denied

    - by Espen
    I'm using pscp to transfer files to a virtual Ubuntu server using this command:

        pscp test.php user@server:/var/www/test.php

    and I get the error "permission denied". If I transfer to the folder /home/user/ I have no problems. I guess this is because the user I'm using doesn't have write access to /var/www/. When I use SSH, I have to use sudo to get access to the /var/www/ path, and I do. Is it possible to tell pscp to "sudo" transfers to the server, so I can get access to /var/www/ and actually transfer files into this folder?
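
    pscp itself has no sudo option, so the usual workaround (sketched here with hypothetical paths) is to copy into the home directory and then move the file with sudo over an interactive SSH session:

        pscp test.php user@server:/home/user/test.php
        ssh -t user@server "sudo mv /home/user/test.php /var/www/test.php"

    The -t flag keeps a terminal attached so sudo can prompt for a password. Granting the user write access to /var/www (for example via group membership) would remove the extra step entirely.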

    Read the article

  • Testing of visualization projects

    - by paxRoman
    We develop small to large visualization projects for different tasks and industries, and sometimes, while rewriting them a couple of times in the process, we hit walls because we discover that we need to add a lot of code to support new requirements. We have now established a design process that seems to work well (at least we reduced the development time for each new project quite a bit), but we're still left scratching our heads over this question: what exactly should we test when testing visualizations? Whether everything we want to explore is on the screen (bounded visualizations)? Whether the data is OK, i.e. valid (that's one of the nice things about visualizations: you can spot errors in your datasets)? Usability? User interaction? Code quality? I can tell you for sure that a simple check of the code quality is certainly not enough! Is there a classic paper or book about how to test visualizations? Also, do you happen to know of classic design patterns for visualizations (except the obvious ones like Pub-Sub)?

    Read the article

  • When writing tests for a WordPress plugin, should I run them inside WordPress or in a normal browser?

    - by Nicola Peluchetti
    I have started using BDD for a WordPress plugin I'm working on, and I'm rewriting the JS codebase to do tests. I've encountered a few problems, but I'm going steady now. I was wondering if I have the right approach, because I'm writing tests that should pass in a normal browser environment and not inside WordPress. I chose to do this because I want my plugin to be totally independent from the WordPress environment: I'm using RequireJS in a way that doesn't expose any globals, and I'm loading my own version of jQuery that doesn't override the one that ships with WordPress. This way my plugin works the same on every WordPress version, and my code will not break if they change the jQuery version or someone uses my plugin on an old WordPress installation. I wonder if this is the right approach, or if I should always test inside the environment I'm working in. Since WordPress implies some globals, I had to write some functions purely for testing purposes, like:

        "get_ajax_url": function() {
            if ( typeof window.ajaxurl === "undefined" ) {
                return "http://localhost/wordpress/wp-admin/admin-ajax.php";
            } else {
                return window.ajaxurl;
            }
        },

    but apart from that I got everything working right. What do you think?

    Read the article

  • What have you learned from the bugs you helped discover and fix?

    - by Ethel Evans
    I liked the core of this question, and wanted to re-ask it in a way that makes it less about 'fun' and more about 'What do these past mistakes tell us about how we can write and test software better?' As an SDET, I'm always looking for anecdotes about new and interesting ways that programs can fail. I've learned a lot from these tales in the past, and would like to get the same from the intelligent people in this community. I'd be interested in hearing what the issue was, how it was caught, whether you think anything could reasonably have been done to catch it earlier or to avoid the same issue on later projects, and any other interesting lessons you took away from this bug. Please only write about bugs you personally were involved with, ideally on a project you worked on (e.g., no "10 years before I was born, this happened and it was FUNNY!" answers). Please vote up answers that are thought-provoking or could change how you develop or test in some way, so this isn't just 'social fun'. Try to avoid voting up something just because it was funny.

    Read the article

  • How do I get Safari to ignore the SSL certificate error?

    - by Tangopop
    In IE 6, 7, and 8, and Firefox 3.6.3 and 3.0.5, I have installed a local SSL certificate on the machine I am testing on, and I have gotten the browser to ignore the SSL error (which comes off one of my web test servers). Now I am trying to do the same thing in Safari 4, with no luck. Basically I am running some automated scripts to test my website before it goes live, and I need to be able to ignore these errors, as the scripts all run autonomously. This is the error screen I am trying to avoid: http://library.bowdoin.edu/news/images/ezproxy-err/safari.jpg As I say, I have installed the certificate locally, and the IE 7 browser on the same machine works fine.
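
    One avenue worth trying (an assumption; I haven't verified how Safari 4 on Windows validates certificates): if it consults the Windows certificate store the way IE does, importing the test certificate into the trusted root store should silence the warning for it too:

        rem run from an elevated prompt; testcert.cer is a hypothetical path
        certutil -addstore -f Root testcert.cer

    If that has no effect, Safari may be keeping its own trust settings, in which case the certificate would need to match the tested hostname exactly and chain to a root it already trusts.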

    Read the article

  • How do you manage updates without a staging environment: CentOS 6.3

    - by Gregg Leventhal
    I am managing about 20 servers, many of them virtual. They serve almost all different purposes, and none are clustered. I have a distributed LAMP stack, a few application servers, some build servers, and a few KVM hosts. They are mostly CentOS 6.3, with a few Ubuntu (unfortunately). I don't have the resources to set up a staging environment where I can keep duplicates of my machines and test updates before rolling them out. I am taking file backups. What I want to know is how you approach updating your Linux systems. I assume you don't just run yum update, but then how are you choosing the packages worth updating? When (if ever) do you update the kernel, etc.? How do you test updates without a staging environment? Snapshot and hope for the best?
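
    A middle ground that works without staging, sketched below; the --security filter assumes the yum security plugin, and note that stock CentOS repositories often lack the updateinfo metadata it needs (unlike RHEL):

        yum check-update                 # review what would change first
        yum install yum-plugin-security  # CentOS 6 name for the plugin
        yum update --security            # security errata only, where metadata exists
        yum history list                 # every transaction is recorded...
        yum history undo <ID>            # ...and can be rolled back

    Combined with a VM snapshot of one representative machine per role before patching, yum history undo covers much of what a staging environment would catch.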

    Read the article

  • Dojo and Separate JavaScript File

    - by Bunch
    For a project I needed to use the ArcGIS API for some mapping. To use this you need to use Dojo, but in this case all it really comes down to is adding some require lines and an addOnLoad call to your web page. At first everything was working great: the maps rendered and the various layers populated as needed. Once it was working, I started moving the various JavaScript functions into their own files to keep everything nice and neat. Then the problems started; mainly, the map would not show up any more. So that was a pretty big problem. Luckily the fix was pretty simple: just move the dojo.addOnLoad line into its own script tag. If I had dojo.addOnLoad in the same script block as the various require lines, it would not work as expected.

    Works:

        <script type="text/javascript" language="javascript" src="javascript/test.js" />
        <script type="text/javascript">
            dojo.require("esri.map");
            dojo.require("esri.tasks.locator");
            dojo.require("esri.tasks.query");
            dojo.require("esri.tasks.geometry");
        </script>
        <script type="text/javascript">
            dojo.addOnLoad(init);
        </script>

    Does not work:

        <script type="text/javascript" language="javascript" src="javascript/test.js" />
        <script type="text/javascript">
            dojo.require("esri.map");
            dojo.require("esri.tasks.locator");
            dojo.require("esri.tasks.query");
            dojo.require("esri.tasks.geometry");
            dojo.addOnLoad(init);
        </script>

    Technorati Tags: JavaScript, Dojo

    Read the article

  • BI&EPM Partner Training and Specialisation Update

    - by Mike.Hallett(at)Oracle-BI&EPM
    1. Just a reminder to take the new-version OBI 11g exams to update your OPN Specialisation: the OPN Exam for OBI Suite 11g is now LIVE.
    2. Check for places on free/subsidised partner-specific boot camps, which are being run in several countries (and you can always fly there... it is still lower cost than the alternatives!):
       a. Exalytics OBI 11g Partner Training 3-day hands-on workshops
       b. EPM Planning (Hyperion) V11.1.2 Implementation hands-on boot camp
       c. Endeca Information Discovery 3-day hands-on training boot camp
    3. Other partner events:
       a. Frankfurt, Dreieich, November 15: Oracle Endeca Information Discovery
       b. Utrecht, November 14: Oracle BI Test Drives
       c. Vilvoorde, November 16: Oracle BI Test Drives
       d. London, November 20: Delivering Insight Across Your Business - Oracle Business Intelligence Workshop
       e. Milano, November 13: Oracle Drive Better Business Outcomes with Big Data and Analytics
    You can also filter searches for courses via the Partner Events Calendar at http://events.oracle.com/search/search?group=Events&keyword=OPN+Only. Otherwise, it is worth checking the Oracle Partner Enablement blog for any BI/EPM news, especially the country sub-blogs on the right. There are also many self-paced tutorials for BI & EPM partners available on demand at any time, and a monthly Partner Enablement Update (PDF) covering the latest partner training on Oracle's new products and releases.

    Read the article

  • Using Apache Environment Variables to set custom ErrorDocument

    - by Tad
    I've got a set of RewriteCond rules that test for various mobile devices and then set environment variables like "env=device:.iphone" or "env=device:.smartphone" if the user agent matches an iPhone or Android device. I'm now trying to redirect the user to custom-styled 404/500 server error pages for each device, by way of the error pages. Ideally I'd like to be able to test for a variable being there and then write a custom ErrorDocument string, but Apache doesn't seem to support that in this case. Any ideas how I can construct if/else tests in an Apache conf file for environment variables?
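
    One workaround (a sketch under the assumption that mod_rewrite is already in play, with hypothetical page paths): point ErrorDocument at a single script and branch there, since environment variables set during the original request reappear with a REDIRECT_ prefix inside the error document:

        # httpd.conf
        ErrorDocument 404 /errors/404.php

        # errors/404.php
        <?php
        $device = isset($_SERVER['REDIRECT_device'])
                ? $_SERVER['REDIRECT_device'] : 'desktop';
        // basename() keeps the include inside the errors directory
        include "404-" . basename($device) . ".html";

    On Apache 2.4 the <If "env('device') == '...'"> expression syntax offers a config-only alternative, but the REDIRECT_ prefix approach works on 2.2 as well.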

    Read the article

  • How do you save/export changes made in Firebug?

    - by blunders
    Using Firebug to edit CSS, how do I save or export the changes made to the CSS? TOOLS: Firefox, Firebug. MAJOR UPDATE: If you know of a way to lock the forward/back/refresh actions on a Firefox tab, please let me know. Otherwise, I've given up on using Firebug/FireDiff as an IDE for CSS. It's nice, but lol... press backspace at the wrong time and ALL your work is gone... funny. So: I really like the browser highlighting of CSS/HTML in Firebug. Know any good CSS editors that do this? I really had hoped Firebug would work, but for now I only see it as being good for ad-hoc inspection and testing; meaning using it for what it's made for. UPDATES: @Lèse majesté: Just as an update, the Web Developer add-on does let you edit CSS, but it does not let you edit/save CSS changes made by Firebug. Meaning you can use Firebug to identify and maybe test changes, but it does not let you save the changes from Firebug. Here's a "how to" covering how to use them together: FF + FB + WD. @Lèse majesté: Still playing around with FireDiff. It works okay. I found one bug already (although I'm just working around it), and there's no "how to" I've been able to find, so I'm just trying every feature and clicking around... (For example, to export a diff you must be over the last item in the list, right-click, and select "Save Diff". The ".diff" is just a text file; no idea at this point why the extension is .diff.)

    Read the article

  • Bizarre SSH Problem - It won't even start

    - by thallium85
    I recently got Ubuntu 12.04 Precise and got it up and running with some MediaWiki software, with a static IP on the box and the router, and was able to access the main page even from a cell phone. Everything seemed great... Then I wanted to finally get rid of the monitor and keyboard and log in remotely via SSH. I installed openssh-server, left everything pointing at port 22 for a test run, and installed PuTTY on my Windows XP machine. I got a connection refused. So I went back and started checking the Ubuntu install itself... (I'm root from this point on.)

        $ sudo -s
        $ service ssh status
        ssh stop/waiting
        $ service ssh start
        ssh start/running, process 2212
        $ service ssh status
        ssh stop/waiting

    Apparently ssh has stopped, or is waiting for something...

        $ ssh localhost
        ssh: connect to host localhost port 22: Connection refused

    I can't even connect to myself... I checked ufw (the firewall) to see if port 22 is okay:

        $ sudo ufw status
        Status: active

        To            Action      From
        22            ALLOW       Anywhere
        22/tcp        ALLOW       Anywhere
        22 (v6)       ALLOW       Anywhere (v6)
        22/tcp (v6)   ALLOW       Anywhere (v6)

    sshd_config shows only Port 22. Is ssh not using the right IP address at all? I just don't get what I did wrong here. When this is up and running I will definitely change the port number, but for now I don't want to mess with the default install too much until a test run with PuTTY is successful.

    Edit: Here are my sshd_config file and my ssh_config file. The command /usr/sbin/sshd -p 22 -D -d -e returns:

        /etc/ssh/sshd_config line 159: Subsystem 'sftp' already defined.

    Edit: @phoibus, moving the sshd_config file aside and reinstalling did the trick! service ssh status now shows that ssh is running, and I am able to log in remotely from my Windows XP computer via PuTTY. Thanks so much! I can now use my monitor for other things!
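
    For anyone hitting the same error: sshd refuses to start when its config is invalid, and the debug run above names the exact problem (a duplicate Subsystem line). The fix that "moving the sshd_config file and reinstalling" implies would look roughly like this (a sketch; purging removes the broken config so the package can regenerate a clean one):

        sudo mv /etc/ssh/sshd_config /etc/ssh/sshd_config.broken
        sudo apt-get purge openssh-server
        sudo apt-get install openssh-server
        sudo service ssh start

    Simply deleting the duplicate line 159 from sshd_config and restarting the service would likely have worked as well.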

    Read the article

  • Sharing My Thoughts on Space Flight

    - by Grant Fritchey
    This went out in the DBA newsletter from Red Gate, but I enjoyed writing it so much that I thought I'd share it with a wider audience: I grew up watching the US space program. I watched men walk on the moon for the first time in 1969, when I was only six years old. From that moment on, I dreamed of going into space. I studied aeronautics and tried to get into the Air Force Academy, all in preparation for my long career as an astronaut. Clearly, that didn't quite work out for me. But it sure could for you. At Red Gate, we're running a new contest: DBA in Space. The prize is a sub-orbital flight. When I first got word of this contest, my immediate response was, "And you need me to go right away and do a test flight? Excellent!" No, no test flight needed; plus, I was pretty low on the list of volunteers. "That's OK, I'll just enter." Then I was told that, as a Red Gate employee, I couldn't win. My next response was, "I quit." Eventually I was talked down off the ledge and agreed to help make this special for some other DBA. Many (most?) of us are science fiction fans, whether the soft science of Star Trek and Star Wars or the hard science of Niven and Pournelle, or Allen Steele. We watched the Shuttles go up and land. We've been dreaming of our own trips into orbit and our vacation home on the Moon for a long, long time. All that might not arrive on schedule, but you've got a shot at breaking clear of the atmosphere. The first stage is a video quiz, starring Brad McGehee, and it's live at www.DBAinSpace.com now. Go for it. Good luck and Godspeed!

    Read the article

  • Internet Explorer 9 is coming Monday to a web near you

    - by brian_ritchie
    Internet Explorer 9 is finally here... well, almost. Microsoft is releasing its new browser on March 14, 2011. IE9 has a number of improvements, including:
    Faster, faster, faster. Did I mention it is faster? With the new browsers coming out from Mozilla, Google, and Microsoft, there has been a flood of speed-test coverage. Chrome has long held the JavaScript speed crown, but according to Steven J. Vaughan-Nichols over at ZDNet, "for the moment at least IE9 is actually the fastest browser I've tested to date." He came to this conclusion after figuring out that the 32-bit version of IE9 has the new Chakra JIT (the 64-bit version doesn't). It also has a DirectX-based rendering engine, so it can do cool tricks once reserved for desktop applications.
    Windows 7 desktop integration. Read my post for more details. Unfortunately, they didn't integrate my ideas... at least not yet :)
    Hot new UI. OK, they "borrowed" some ideas from Chrome... but that is the best form of flattery.
    Standards compliance. A real focus on HTML5 and CSS3. Definite goodness for developers.
    So, go get yourself some IE9 on Monday and enjoy!

    Read the article

  • Testing DNS configuration of domain by using hosts file?

    - by Alex Blundell
    I'm currently migrating a website to another server and want to test the DNS configuration (more specifically, the email MX records) before moving the domain over. I've configured the DNS on the new server to have MX entries for Google Apps in the same way it's configured on the old server. The domain is controlled by nameservers on the old server at the moment, so the change would simply be updating the nameservers to the new ones. (What I'm getting at is that DNS is controlled at the server level, not the registrar level.) Since the website has quite a number of users, I want to make sure the configuration is right before flicking the switch. For this, can I add an entry to the hosts file of my local computer to point the domain to the new server? I've done this, and the web server works, but would this also test the email MX records on the new server?
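
    It wouldn't: a hosts-file entry only overrides name-to-address lookups for local applications, so MX records never come into play. A way to test them without touching the live delegation is to query the new server's DNS directly (the IP and domain below are placeholders):

        dig @192.0.2.10 example.com MX +short
        dig @192.0.2.10 example.com NS +short

    If the MX answers match the Google Apps records served by the old nameservers, the zone on the new server is ready for the nameserver switch.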

    Read the article
