Search Results

Search found 25284 results on 1012 pages for 'test driven'.

  • Repair BAD Sectors or Buy a new HDD?

    - by Nehal J. Wani
    I have a Seagate internal hard disk drive. I recently opened up my laptop [Dell Inspiron N5010] [Warranty has expired], cleaned it and it worked normally after waking up from hibernation. However, when I restarted it, it stuck on windows loading screen, then tried to boot from Dell recovery partition but failed. It gave the error: Windows has encounter a problem communicating with a device connected to your computer. This error can be caused by unplugging a removable storage device such as an external USB drive while the device is in use, or by faulty hardware such as a hard drive or CD-ROM drive that is failing. Make sure any removable storage is properly connected and then restart your computer If you continue to receive this error message, contact the hardware manufacturer. Status: 0xc00000e9 Info: An unexpected I/O error has occurred. While cleaning, I had mistakenly touched the round silvery thing at the bottom of the HDD. I don't know whether this has caused the problem or not. Since I have Fedora also installed in the same HDD, I can boot from it but it shows weird read errors when I ask it to mount Windows partitions. The disk utility also says that the Hard Disk has many bad sectors and needs to be replaced. I downloaded Seatools from Seagate website and used it. In the long test, I gave it permission to repair the first 100 errors which it did successfully. Now I am confused at what I should do. Internal Hard Disk Costs: a. Internal HDD 500GB Costs: Rs3518 b.1 External HDD 500GB Costs: Rs3472 b.2 External HDD 1TB Costs: Rs5500 c. Internal to External Converter Costs: Rs650 I have the following options: (i) Buy an External HDD, backup my data. Try to repair bad sectors of HDD. Then two cases arise: (a) My Internal HDD gets repaired [almost] (b) My internal HDD doesn't get repaired. Then I need to buy another internal HDD and replace the damaged one. OR break the seal of the external one and put it inside my laptop as internal. Breaking the case involves risks. (ii) Buy a Internal HDD and an Internal to External Converter Case [Not very reliable], backup my data. Try to repair bad sectors of HDD. Then two cases arise: (a) My Internal HDD gets repaired [almost] (b) My internal HDD doesn't get repaired. Then I need to just put in the new internal HDD I just bought. Experts, please guide me as to what will be the most VFM option? Also, if a HDD is failing, is it that I shouldn't read from it too otherwise there is a chance of other sectors failing? What I mean is, is it wrong to read from the HDD without taking backup first?
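
    For reference, one way to get a quick health read from the Fedora side before spending money is smartmontools. This is only a sketch: the device name /dev/sda is an assumption, so adjust it to whatever the Seagate actually shows up as.

        # Raw SMART attributes: rising Reallocated/Pending/Uncorrectable counts
        # usually mean the drive is still getting worse, not just scarred.
        sudo smartctl -a /dev/sda | grep -iE 'reallocated|pending|uncorrect'
        # Kick off the drive's own long self-test; read the result later with
        # 'smartctl -l selftest /dev/sda'.
        sudo smartctl -t long /dev/sda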

  • Melting Laptop Power Supply Tip

    - by AlReece45
    Several (6-7) months ago, my laptop power supply cord got a cut in it and stopped working. Having gotten cheap (and short) power supplies in the past, I decided to buy 2 brand new ones from the manufacturer (ASUS). Now, I used my laptop a little less than usual between February and March. During that time I noticed a few times that the power supply, even though plugged in, did not provide power. Often the computer would just off on me. I figured it was just that one power supply being bad. I had left the alternate at my parent's house in another state and asked them to ship it to me. Now, at work the other day I wanted to get a file off the of hard disk. So I booted it up, knowing that it had a low battery, plugged it in. During the first 2 minutes of use, I was told that the battery was low and I should plug it in. I unplugged it, inspected the end (Being plugged in, this was suspicious), and decided I shouldn't plug it back in-- the plastic on the tip was melting from the heat of the metal on the tip. The computer had simply booted up and I had the file-manager open. It had not been on for more than 10 hours. Now I know that computers tend to get pretty hot. However, the melting point of plastic is usually above 200C.. so that's much hotter than the computer should be generating. I went and bought a THIRD power supply. This time a universal one from Best Buy (it was very fast to buy and test). I tried it out on the computer and it's tip is melting as well. My older laptop that uses the universal power supply uses it perfectly (has been about a week and a part of use now). I have tried using the computer without the battery, with the same effect. Obviously, this is not a problem with the power supply. My room mate and I being trained computer techs were contemplating taking the computer apart and desoldering and resoldering on the power tip. (The computer is about 6 months out of its 2-year warranty). We're hoping that will correct the issue as I would prefer to devote my money on a Good Desktop rather than yet ANOTHER $1200+ laptop. Is there any thing I'm missing here that might cause the the tip on the power unit to melt?

  • Nginx https rewrite turns POST to GET

    - by x7311
    My proxy server runs on IP A, and this is how people access my web service. The nginx configuration redirects to a virtual machine on IP B. For the proxy server on IP A, I have this in my sites-available:

        server {
            listen 443;
            ssl on;
            ssl_certificate nginx.pem;
            ssl_certificate_key nginx.key;
            client_max_body_size 200M;
            server_name localhost 127.0.0.1;
            server_name_in_redirect off;
            location / {
                proxy_pass http://10.10.0.59:80;
                proxy_redirect http://10.10.0.59:80/ /;
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

        server {
            listen 80;
            rewrite ^(.*) https://$http_host$1 permanent;
            server_name localhost 127.0.0.1;
            server_name_in_redirect off;
            location / {
                proxy_pass http://10.10.0.59:80;
                proxy_redirect http://10.10.0.59:80/ /;
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    The proxy_redirect was taken from "how do I get nginx to forward HTTP POST requests via rewrite?". Everything that hits the public IP hits 443 because of the rewrite; internally, we forward to port 80 on the virtual machine. But when I run a Python script such as the one below to test our configuration:

        import requests
        data = {'username': '....', 'password': '.....'}
        url = 'http://IP_A/api/service/signup'
        res = requests.post(url, data=data, verify=False)
        print res
        print res.json
        print res.status_code
        print res.headers

    I get a 405 Method Not Allowed. In nginx we found that when the request hit the internal server, the internal nginx received a GET, even though the original request was a POST (as the Python script shows). So it seems the rewrite is the problem. Any idea how to fix this? When I commented out the rewrite, the request hits port 80 for sure and goes through. Since it was able to talk to our internal server, the proxying itself has no issue; it's just that the rewrite drops the POST and turns it into a GET. Thank you! (This will also be asked on the nginx forum because this is a critical blocker...)
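
    For what it's worth, a likely explanation (an assumption, not verified against this exact setup): "rewrite ... permanent" sends a 301, and most HTTP clients re-issue a 301 redirect as a GET. A redirect code that preserves the request method looks like the sketch below, assuming an nginx recent enough to accept 307 in a return directive.

        server {
            listen 80;
            server_name localhost 127.0.0.1;
            # 307 asks the client to repeat the request with the same method,
            # so a POST arriving on port 80 is retried on https as a POST
            return 307 https://$http_host$request_uri;
        }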

  • Excel 2010: dynamic update of drop down list based upon datasource validation worksheet changes

    - by hornetbzz
    I have one worksheet for setting up the data sources of multiple data validation lists. in other words, I'm using this worksheet to provide drop down lists to multiple other worksheets. I need to dynamically update all worksheets upon any of a single or several changes on the data source worksheet. I may understand this should come with event macro over the entire workbook. My question is how to achieve this keeping the "OFFSET" formula across the whole workbook ? Thx To support my question, I put the piece of code that I'm trying to get it working : Provided the following informations : I'm using such a formula for a pseudo dynamic update of the drop down lists, for example : =OFFSET(MyDataSourceSheet!$O$2;0;0;COUNTA(MyDataSourceSheet!O:O)-1) I looked into the pearson book event chapter but I'm too noob for this. I understand this macro and implemented it successfully as a test with the drop down list on the same worksheet as the data source. My point is that I don't know how to deploy this over a complete workbook. Macro related to the datasource worksheet : Option Explicit Private Sub Worksheet_Change(ByVal Target As Range) ' Macro to update all worksheets with drop down list referenced upon ' this data source worksheet, base on ref names Dim cell As Range Dim isect As Range Dim vOldValue As Variant, vNewValue As Variant Dim dvLists(1 To 6) As String 'data validation area Dim OneValidationListName As Variant dvLists(1) = "mylist1" dvLists(2) = "mylist2" dvLists(3) = "mylist3" dvLists(4) = "mylist4" dvLists(5) = "mylist5" dvLists(6) = "mylist6" On Error GoTo errorHandler For Each OneValidationListName In dvLists 'Set isect = Application.Intersect(Target, ThisWorkbook.Names("STEP").RefersToRange) Set isect = Application.Intersect(Target, ThisWorkbook.Names(OneValidationListName).RefersToRange) ' If a change occured in the source data sheet If Not isect Is Nothing Then ' Prevent infinite loops Application.EnableEvents = False ' Get previous value of this cell With Target vNewValue = .Value Application.Undo vOldValue = .Value .Value = vNewValue End With ' LOCAL dropdown lists : For every cell with validation For Each cell In Me.UsedRange.SpecialCells(xlCellTypeAllValidation) With cell ' If it has list validation AND the validation formula matches AND the value is the old value If .Validation.Type = 3 And .Validation.Formula1 = "=" & OneValidationListName And .Value = vOldValue Then ' Debug ' MsgBox "Address: " & Target.Address ' Change the cell value cell.Value = vNewValue End If End With Next cell ' Call to other worksheets update macros Call Sheets(5).UpdateDropDownList(vOldValue, vNewValue) ' GoTo NowGetOut Application.EnableEvents = True End If Next OneValidationListName NowGetOut: Application.EnableEvents = True Exit Sub errorHandler: MsgBox "Err " & Err.Number & " : " & Err.Description Resume NowGetOut End Sub Macro UpdateDropDownList related to the destination worksheet : Sub UpdateDropDownList(Optional vOldValue As Variant, Optional vNewValue As Variant) ' Debug MsgBox "Received info for update : " & vNewValue ' For every cell with validation For Each cell In Me.UsedRange.SpecialCells(xlCellTypeAllValidation) With cell ' If it has list validation AND the validation formula matches AND the value is the old value ' If .Validation.Type = 3 And .Value = vOldValue Then If .Validation.Type = 3 And .Value = vOldValue Then ' Change the cell value cell.Value = vNewValue End If End With Next cell End Sub

  • What DNS server to use for dynamic load-balancing of website?

    - by Marki555
    I will have 2 servers in different datacenters (different countries) and I want to use DNS load-balancing mainly for High Availability of website hosted on those 2 servers. It is just ad tracking site, which records hit in local database and returns few lines on html code. I want to return 2 A records each time because of DNS pinning in browsers (if one server fails, browser will try second A record which it has already cached). Both servers will be acting also as DNS servers for redundancy. Now comes my proposed solution: I will use BIND and have both servers as a master for that zone. On each server there will be running script, which will periodically test availability (http) of both servers and remove IP from DNS in case of failure. Now the questions :) 1) Is BIND suitable for this solution? I think BIND performance is good and it is easy to manipulate the zone file via script. And as I will modify the zone only in case of failure/maintenance, the modifications (and thus bind reload) won't be often. 2) I plan to use TTL of 5 minutes. The website will have about 1000-3000 req/s but from distinct clients (each IP only 1-3 requests), so I think the DNS load won't be too much. I suppose their ISPs will cache the responses for those 5 mins. Is there any reason to lower the TTL even more? 3) Is my master-master approach good? Or should I make one of the servers master and the other one slave? Right now each server can monitor both itself and the other one. If only webservice fails, both DNS nodes will notice it. If the whole server fails, then the remaining DNS node will notice it and the failed node will not answer DNS queries anyway. 4) Is it a big issue when one NS server does not respond to queries? If yes, I can make a third DNS, so anytime at least 2 of them would accept queries... 5) Should I rewrite the zone file via script, or just use dynamic DNS update (for example via nsupdateutility)?
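
    For reference on question 5, a minimal sketch of the dynamic-update route with nsupdate. The zone name, addresses and key file here are placeholders, and it assumes the zone is configured to accept updates signed with that TSIG key.

        # run from the monitoring script: drop the failed A record, add the healthy one
        nsupdate -k /etc/bind/failover.key <<'EOF'
        server 127.0.0.1
        zone example.com
        update delete www.example.com. A 192.0.2.10
        update add www.example.com. 300 A 192.0.2.20
        send
        EOF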

  • Linux software RAID6: 3 drives offline - how to force online?

    - by Ole Tange
    This is similar to "3 drives fell out of RAID6 mdadm - rebuilding?" except that it is not due to a failing cable. Instead, the third drive fell offline during the rebuild of another drive. The drive failed with:

        kernel: end_request: I/O error, dev sdc, sector 293732432
        kernel: md/raid:md0: read error not correctable (sector 293734224 on sdc).

    After rebooting, both these sectors and the sectors around them are fine. This leads me to believe the error is intermittent and that the device simply took too long to error-correct the sector and remap it. I expect that no data was written to the RAID after it failed, so I hope that if I can kick the last failing device back online, the RAID is fine and the XFS filesystem is OK, maybe with a few missing recent files. Taking a backup of the disks in the RAID takes 24 hours, so I would prefer a solution that works the first time. I have therefore set up a test scenario:

        export PRE=3
        parallel dd if=/dev/zero of=/tmp/raid${PRE}{} bs=1k count=1000k ::: 1 2 3 4 5
        parallel mknod /dev/loop${PRE}{} b 7 ${PRE}{} \; losetup /dev/loop${PRE}{} /tmp/raid${PRE}{} ::: 1 2 3 4 5
        mdadm --create /dev/md$PRE -c 4096 --level=6 --raid-devices=5 /dev/loop${PRE}[12345]
        cat /proc/mdstat
        mkfs.xfs -f /dev/md$PRE
        mkdir -p /mnt/disk2
        umount -l /mnt/disk2
        mount /dev/md$PRE /mnt/disk2
        seq 1000 | parallel -j1 mkdir -p /mnt/disk2/{}\;cp /bin/* /mnt/disk2/{}\;sleep 0.5 &
        mdadm --fail /dev/md$PRE /dev/loop${PRE}3 /dev/loop${PRE}4
        cat /proc/mdstat
        # Assume reboot so no process is using the dir
        kill %1; sync &
        kill %1; sync &
        # Force fail one too many
        mdadm --fail /dev/md$PRE /dev/loop${PRE}1
        parallel --tag -k mdadm -E ::: /dev/loop${PRE}? | grep Upda
        # loop 2,5 are newest. loop1 almost newest => force add loop1

    The next step is to add loop1 back, and this is where I am stuck. After that, do an XFS consistency check. When that works, check that the solution also works on real devices (such as 4 USB sticks).
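
    For reference, a minimal sketch of the force-assemble route one might try on the test array above; it is something to verify on the loop devices first, not a confirmed fix for the real disks.

        # stop the degraded array and reassemble it, forcing in the member with
        # the slightly stale event count (loop1); mdadm warns but brings it back
        mdadm --stop /dev/md$PRE
        mdadm --assemble --force /dev/md$PRE /dev/loop${PRE}1 /dev/loop${PRE}2 /dev/loop${PRE}5
        # read-only XFS consistency check before mounting anything
        xfs_repair -n /dev/md$PRE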

  • Galaxy Tab 3 producing continuous LightSensor error in LogCat

    - by Richard Tingle
    I am using a Galaxy Tab 3 as a test device for writing an android app. As such I'm interested in the output of the LogCat which is being filled with these error level messages. The device itself appears to work correctly, apps which rely on the light sensor correctly respond to it and the number in the error itself goes down if the light sensor is obscured. If I wasn't using it to develop apps I wouldn't even be aware of the issue but I believe it is an issue with the device itself not my app: simply plugging the tab 3 into the computer and using Eclipse - ADT to look at the LogCat without any app running leads to these errors being shown. I know I could filter the LogCat to ignore these errors but inconvenience aside; they concern me. A sample of the log cat is below (it generates errors continuously). This is on verbose so it includes some debug level (D/) messages as well as the error level messages (E/). How can I correct the device to no longer generate these errors. 06-11 10:08:45.789: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14 06-11 10:08:45.992: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14 06-11 10:08:46.195: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14 06-11 10:08:46.398: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14 06-11 10:08:46.601: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14 06-11 10:08:46.804: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14 06-11 10:08:47.007: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14 06-11 10:08:47.210: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14 06-11 10:08:47.414: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14 06-11 10:08:47.617: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14 06-11 10:08:47.820: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14 06-11 10:08:48.023: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14 06-11 10:08:48.039: D/dalvikvm(15201): GC_CONCURRENT freed 1947K, 17% free 16973K/20359K, paused 13ms+13ms, total 50ms 06-11 10:08:48.226: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14 06-11 10:08:48.429: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 13 06-11 10:08:48.632: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 13 06-11 10:08:48.632: D/STATUSBAR-NetworkController(472): refreshSignalCluster: data=0 bt=false 06-11 10:08:48.835: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14 06-11 10:08:49.039: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14 06-11 10:08:49.242: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14 06-11 10:08:49.445: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 13 06-11 10:08:49.632: D/STATUSBAR-NetworkController(472): refreshSignalCluster: data=0 bt=false 06-11 10:08:49.648: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14 06-11 10:08:49.851: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 13
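
    For reference, the LogCat filtering mentioned above can also be done from the command line with adb. This is only a sketch, and it merely hides the messages; it does not stop the device's sensor HAL from emitting them.

        # silence the LightSensor tag while keeping every other tag at verbose
        adb logcat LightSensor:S *:V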

  • Concerning persistence size in the Linux Live Creator

    - by user63085
    Message : Hello everyone! I have ,for the last several months, used the Linux Live USB Creator which it is a very useful app to make portable OS on to flash drives. I mostly use this application to test and try out new OS's as they are released, before I decide to make a hard disk installatio on to the computer. In many cases, the application developers will allow the “persistence” feature in the flash-drive-installed OS, which is just another way of saying that after multiple boot-ups and shutdowns, all the changes made to the OS will be saved in the flash-drive. But I have a question about the limit of the Persistence size in Linux Live USB Creator (currently version 2.6). I install Super OS 10 on to a partition on my external drive which has 30 GB. I wanted to reserve 10 GB for the persistence so that I can install more applications and space will not run out as I update the installed applications or when I do system updates. But why is it that only 3950 MB can be put for persistence? It would be great if, when desired, as much more persistence space could be set aside so that the space will not run out soon. Also, as I have installed the OS on a 30 GB drive, I tried to see how much space is left. But it seems only the remaining of the Persistence space is displayed when I click on the File System folder. For example, after I have just installed it now, there is 3.5 GB of free space. Where can I access the remaining 26 GB or so drive space which is in the same drive? How do I access it Sir?? It would be helpful if any one could explain and help me with this. Most importantly, it would be a big relief if the persistence can be somehow expanded by a work-around so that I can continue using my SuperOS 10.04 (now heavily customized) OS, which unfortunately has just over 576 MB of space left now, after I removed OpenOffice.org and installed the Libre Office earlier today. This is what remains from the maximum allowable 3950 MB of space for persistence at set-up. Thanks in advance!

  • Nginx Rewrite Rule For File Within Folder Not Working

    - by user3620111
    Good evening everyone, or possibly early morning if you are in my neck of the woods. My problem seems trivial, but after several hours of testing, researching and fiddling I can't get this simple nginx rewrite to work. There are several rewrites we need, some with multiple parameters, but I can't even get this simple one-parameter URL to change to the desired form.

        Current: website.com/public/viewpost.php?id=post-title
        Desired: website.com/public/post/post-title

    Can someone kindly point out what I have done wrong? I am baffled / very tired... For testing purposes before we launch we are just using a separate port on the server. Here is that section:

        # Listen on port 7774 for dev test
        server {
            listen 7774;
            server_name localhost;
            root /usr/share/nginx/html/paa;
            index index.php home.php index.html index.htm /public/index.php;
            location ~* /uploads/.*\.php$ {
                if ($request_uri ~* (^\/|\.jpg|\.png|\.gif)$ ) { break; }
                return 444;
            }
            location ~ \.php$ {
                try_files $uri @rewrite =404;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_pass php5-fpm-sock;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_intercept_errors on;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
            location @rewrite {
                rewrite ^/viewpost.php$ /post/$arg_id? permanent;
            }
        }

    I have tried countless variations, such as the @rewrite above and simpler ones like:

        location / {
            rewrite ^/post/(.*)$ /viewpost.php?id=$1 last;
        }
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_pass php5-fpm-sock;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_intercept_errors on;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

    I cannot seem to get anything to work at all; I have tried changing the location and tried multiple rules... Please tell me what I have done wrong. Pause for facepalm. [Relocated from Stack Overflow as per mod suggestion.]
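
    For reference, a minimal sketch of the pretty-URL mapping described above, assuming viewpost.php lives under /public/ relative to the site root; it has not been tested against this exact server block.

        # rewrite /public/post/<slug> internally to the existing PHP handler,
        # so the browser keeps the clean URL while PHP still receives ?id=<slug>
        location ~ ^/public/post/(.+)$ {
            rewrite ^/public/post/(.+)$ /public/viewpost.php?id=$1 last;
        }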

  • Local dns for testing websites using mobile devices

    - by Morpheu5
    Hi. I have no idea where to start from so sorry in advance if this topic has already been discussed. I usually develop web sites using my laptop as a development server, and recently I needed to test a web site using various mobile devices that can connect via wifi. Having no real AP, I set up a ad-hoc network using my laptop's wireless card and the devices can correctly browse the Internet and access the laptop's web server. The setup is as follows: subnet: 192.168.1.0/24 gateway to the Internet (wired adsl router/modem): 192.168.1.1 laptop: 192.168.1.64 (eth0, wired if connected to the gateway) and 192.168.1.32 (eth1, wifi if somewhat bridged to eth0) mobile devices (same for all, I only use one of them at any time for simplicity): 192.168.1.11 with default gw 192.168.1.1 Now, if I open either 192.168.1.32 or 192.168.1.64 from the mobile devices, I correctly get the default host of my Apache configuration. However I usually work with virtual hosts for many practical reasons, one of which being Drupal's peculiar implementation of multi-sites. For those who don't know how this works, Drupal takes the request's hostname and searches into its sites/ subdirectories for an appropriate configuration file. So, for example, suppose I request www.example.com, then Drupal would search for a config file in the following directories: sites/www.example.com/ sites/example.com/ sites/com/ sites/default/ So I decided to adopt the following style of virtual hosts: if the website I'm working on will be accessible using www.example.com I set up a sites/www.example.com/ directory and create a virtual host for local.www.example.com so Drupal have no trouble finding it. I've been told this is suboptimal from a dns point of view since I'd have to create an authoritative entry for example.com and turn Bind on only when I'm supposed to access the local copy, which is weird. However, if this is the only path I can follow, I still have some problems with Bind's configuration, as I couldn't find any guide that tells me in a clear, noob-friendly way, how to set up such an entry. On the other hand, I was wondering if I could set up an authoritative entry for local, so I could access www.example.com.local and tell in some way (which I don't even know if this is possible) Apache to put www.example.com instead of www.example.com.local in the relevant environment variable. Anyway, I have a last problem, sort of: when I launch Bind in debug mode with high verbosity, and make 192.168.1.32 as the primary dns for the devices, the output doesn't say anything about requests being made from the devices to Bind, so I'm not even sure it comes into play. As you can see, I'm a complete noob at these matters, but I'm eager to learn, so any help/pointer will be appreciated.
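
    For reference, a lighter-weight sketch than BIND for this kind of local testing is dnsmasq. The package name and paths are assumed for Ubuntu, and the addresses below simply mirror the setup described above.

        # /etc/dnsmasq.conf
        # answer www.example.com with the laptop's address so the wifi devices
        # hit the local Apache vhost instead of the live site
        address=/www.example.com/192.168.1.64
        # only answer queries arriving on the ad-hoc wifi interface
        listen-address=192.168.1.32
        # point the mobile devices' DNS at 192.168.1.32 and keep the vhost's
        # ServerName as www.example.com so Drupal's multi-site lookup still matches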

  • Need help troubleshooting highly variable ping times

    - by Elliot.Bradshaw
    I'm at work using Citrix (think Remote Desktop) to connect to client sites. With my job I have to write a fair bit of code while I'm connected remotely via Citrix, so the latency of my internet connection is important. If I'm getting ping times above 250ms, then it becomes almost impossible to scroll, click or type with accuracy. Recently my Comcast business internet has been exhibiting highly variable ping times. If I ping google.com, I'll get pings that range from 9ms all the way up to 1300ms. The problem seems to be at its worst during the hours of 1PM to 4:30PM. Outside of those hours and the variance in pings settles down, mostly between 9ms and 50ms. The signal to noise ratio and upstream power are both fine on my modem--the values are here: http://pastebin.com/D4hWGPXf I ran a trace route from my computer to google.com (the results of which are here: http://pastebin.com/GcdjYvMh) and did another test ping to the IP of the first hop outside of our local network (73.98.44.1)--the variance in ping times existed in exactly the same manner as if I were pinging Google. Connecting directly to the cable modem by CAT5 makes no difference. Here is a screenshot demonstrating the variance of the ping times: http://postimage.org/image/haocdeauv/full/ -- as you can see it can get pretty bad. Three Comcast techs have been out (two of them were here when the problem wasn't happening) and they as well as the regional tier 2 Comcast support were unable to diagnose the problem. I now have a ticket open with tier 3 support, but have yet to hear back from them. Does anyone know what could cause these sorts of problems or have any idea from the traceroute above where it could be originating? The regional tier 2 guy tried to tell me that what I'm seeing is normal--are highly variable ping times like that ever acceptable? Anything I should ask Comcast to do or look at to get this problem fixed? Any tips/advice much appreciated! Edit: This is Comcast cable internet at a small start-up, we've ruled out congestion in our private LAN as a cause (i.e., no one's watching YouTube when the pings become variable). Update: Tier 3 Comcast support advised swapping out the modem, a tech came here today and did that--same problem persists.
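
    For reference, a sketch of two commands that produce the kind of per-hop evidence a tier-3 ticket usually needs, run during the bad 1PM-4:30PM window. This assumes a Windows workstation (on Linux, mtr -rwc 100 google.com gives similar per-hop statistics); the first-hop address is taken from the traceroute already posted.

        rem per-hop loss and latency, sampled 100 times per hop
        pathping -q 100 google.com
        rem repeated pings against the first hop past the modem, logged for the ticket
        ping -n 500 73.98.44.1 > first-hop-pings.txt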

  • Weird nfs performance: 1 thread better than 8, 8 better than 2!

    - by Joe
    I'm trying to determine the cause of poor NFS performance between two Xen virtual machines (client & server) running on the same host. Specifically, the speed at which I can sequentially read a 1GB file on the client is much lower than what would be expected based on the measured network connection speed between the two VMs and the measured speed of reading the file directly on the server. The VMs are running Ubuntu 9.04 and the server is using the nfs-kernel-server package. According to various NFS tuning resources, changing the number of nfsd threads (in my case kernel threads) can affect performance. Usually this advice is framed in terms of increasing the number from the default of 8 on heavily-used servers. What I find in my current configuration:

        RPCNFSDCOUNT=8 (default): 13.5-30 seconds to cat a 1GB file on the client, so 35-80MB/sec
        RPCNFSDCOUNT=16: 18s to cat the file, 60MB/s
        RPCNFSDCOUNT=1: 8-9 seconds to cat the file (!!?!), 125MB/s
        RPCNFSDCOUNT=2: 87s to cat the file, 12MB/s

    I should mention that the file I'm exporting is on a RevoDrive SSD mounted on the server using Xen's PCI passthrough; on the server I can cat the file in a few seconds (250MB/s). I am dropping caches on the client before each test. I don't really want to leave the server configured with just one thread, as I'm guessing that won't work so well when there are multiple clients, but I might be misunderstanding how that works. I have repeated the tests a few times (changing the server config in between) and the results are fairly consistent. So my question is: why is the best performance with 1 thread?

    A few other things I have tried changing, to little or no effect:

      - increasing the values of /proc/sys/net/ipv4/ipfrag_low_thresh and /proc/sys/net/ipv4/ipfrag_high_thresh to 512K and 1M from the defaults of 192K and 256K
      - increasing the value of /proc/sys/net/core/rmem_default and /proc/sys/net/core/rmem_max to 1M from the default of 128K
      - mounting with client options rsize=32768, wsize=32768

    From the output of sar -d I understand that the actual read sizes going to the underlying device are rather small (<100 bytes), but this doesn't cause a problem when reading the file locally on the client. The RevoDrive actually exposes two "SATA" devices, /dev/sda and /dev/sdb; dmraid then picks up a fakeRAID-0 striped across them, which I have mounted to /mnt/ssd and then bind-mounted to /export/ssd. I've done local tests on my file using both locations and see the good performance mentioned above. If answers/comments ask for more details I will add them.
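
    For reference, a sketch of how the thread count can be varied between runs on Debian/Ubuntu, together with the cache-drop step mentioned above. The config path comes from the nfs-kernel-server package and the mount point and filename are placeholders; adjust both to match the actual setup.

        # on the server: set the nfsd thread count and restart the NFS service
        sudo sed -i 's/^RPCNFSDCOUNT=.*/RPCNFSDCOUNT=1/' /etc/default/nfs-kernel-server
        sudo /etc/init.d/nfs-kernel-server restart

        # on the client: drop the page cache, then time the sequential read
        sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
        time cat /mnt/nfs/testfile-1G > /dev/null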

  • Why is my concurrency capacity so low for my web app on a LAMP EC2 instance?

    - by AMF
    I come from a web developer background and have been humming along building my PHP app, using the CakePHP framework. The problem arose when I began the ab (Apache Bench) testing on the Amazon EC2 instance in which the app resides. I'm getting pretty horrendous average page load times, even though I'm running a c1.medium instance (2 cores, 2GB RAM), and I think I'm doing everything right. I would run:

        ab -n 200 -c 20 http://localhost/heavy-but-view-cached-page.php

    Here are the results:

        Concurrency Level:      20
        Time taken for tests:   48.197 seconds
        Complete requests:      200
        Failed requests:        0
        Write errors:           0
        Total transferred:      392111200 bytes
        HTML transferred:       392047600 bytes
        Requests per second:    4.15 [#/sec] (mean)
        Time per request:       4819.723 [ms] (mean)
        Time per request:       240.986 [ms] (mean, across all concurrent requests)
        Transfer rate:          7944.88 [Kbytes/sec] received

    While the ab test is running, I run VMStat, which shows that Swap stays at 0, CPU is constantly at 80-100% (although I'm not sure I can trust this on a VM), RAM utilization ramps up to about 1.6G (leaving 400M free). Load goes up to about 8 and site slows to a crawl.

    Here's what I think I'm doing right on the code side:

      - In Chrome browser uncached pages typically load in 800-1000ms, and cached pages load in 300-500ms. Not stunning, but not terrible either.
      - Thanks to view caching, there might be at most one DB query per page-load to write session data. So we can rule out a DB bottleneck.
      - I have APC on.
      - I am using Memcached to serve the view cache and other site caches.
      - xhprof code profiler shows that cached pages take up 10MB-40MB in memory and 100ms - 1000ms in wall time.

    Pages that would be the worst offenders would look something like this in xhprof:

        Total Incl. Wall Time (microsec):  330,143 microsecs
        Total Incl. CPU (microsecs):       320,019 microsecs
        Total Incl. MemUse (bytes):        36,786,192 bytes
        Total Incl. PeakMemUse (bytes):    46,667,008 bytes
        Number of Function Calls:          5,195

    My Apache config:

        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 3
        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients          120
            MaxRequestsPerChild 1000
        </IfModule>

    Is there something wrong with the server? Some gotcha with the EC2? Or is it my code? Some obvious setting I should look into? Too many DNS lookups? What am I missing? I really want to get to 1,000 concurrency capacity, but at this rate, it ain't gonna happen.

  • Apache VirtualHost Blockhole (Eats All Requests on All Ports on an IP)

    - by Synetech inc.
    I’m exhausted. I just spent the last two hours chasing a goose that I have been after on-and-off for the past year. Here is the goal, put as succinctly as possible.

    Step 1: HOSTS file:

        127.0.0.5 NastyAdServer.com
        127.0.0.5 xssServer.com
        127.0.0.5 SQLInjector.com
        127.0.0.5 PornAds.com
        127.0.0.5 OtherBadSites.com
        …

    Step 2: Apache httpd.conf:

        <VirtualHost 127.0.0.5:80>
            ServerName adkiller
            DocumentRoot adkiller
            RewriteEngine On
            RewriteRule (\.(gif|jpg|png|jpeg)$) /p.png [L]
            RewriteRule (.*) /ad.htm [L]
        </VirtualHost>

    So basically what happens is that the HOSTS file redirects designated domains to the localhost, but to a specific loopback IP address. Apache listens for any requests on this address and serves either a transparent pixel graphic or an empty HTML file. Thus, any page or graphic on any of the bad sites is replaced with nothing (in other words, an ad/malware/porn/etc. blocker). This works great as is (and has been for me for years now). The problem is that these bad things are no longer limited to just HTTP traffic. For example:

        <script src="http://NastyAdServer.com:99">

    or

        <iframe src="https://PornAds.com/ad.html">

    or a Trojan using ftp://spammaster.com/[email protected];[email protected];[email protected], or an app “phoning home” with private info in a crafted ICMP packet by pinging CardStealer.ru:99.

    Handling HTTPS is a relatively minor bump: I can create a separate VirtualHost just like the one above, replacing port 80 with 443 and adding in SSL directives. This leaves the other ports to be dealt with. I tried using * for the port, but then I get overlap errors. I tried redirecting all requests to the HTTPS server and vice versa, but neither worked; either the SSL requests wouldn’t redirect correctly or else the HTTP requests gave the “You’re speaking plain HTTP to an SSL-enabled server port…” error. Further, I cannot figure out a way to test whether other ports are being successfully redirected (I could try using a browser, but what about FTP, ICMP, etc.?).

    I realize that I could just use a port-blocker (eg ProtoWall, PeerBlock, etc.), but there are two issues with that. First, I am blocking domains with this method, not IP addresses, so to use a port-blocker I would have to get each and every domain’s IP, and update them frequently. Second, using this method I can have Apache keep logs of all the ad/malware/spam/etc. requests for future analysis (my current AdKiller logs are already 466MB right now). I appreciate any help in successfully setting up an Apache VirtualHost blackhole. Thanks.

  • Restarting or stopping apache results in waiting forever

    - by steko
    I have two simple WSGI apps running on top of mod_wsgi and apache2 on a test development server. There is no mod_python on this machine. The WSGI configuration is as follows:

        WSGIDaemonProcess tops stack-size=524288 maximum-requests=5
        WSGIScriptAlias /tops /home/ubuntu/tops-cloud/tops.wsgi
        <Directory /home/ubuntu/tops-cloud>
            WSGIProcessGroup tops
            WSGIApplicationGroup %{GLOBAL}
            Order deny,allow
            Allow from all
        </Directory>

        WSGIDaemonProcess flaskal maximum-requests=5
        WSGIScriptAlias /c14 /home/ubuntu/c14/flaskal/flaskal.wsgi
        <Directory /home/ubuntu/c14/flaskal>
            WSGIProcessGroup flaskal
            WSGIApplicationGroup %{GLOBAL}
            Order deny,allow
            Allow from all
        </Directory>

    If I make changes to the app, I need to restart the web server, so I would expect that a simple sudo service apache2 restart does what I need. Same goes for any changes to the config (e.g. number of maximum requests, etc). Instead, it never ends "waiting", like this:

        $ sudo service apache2 restart
         * Restarting web server apache2
           ... waiting ..................................................

    until I just do CTRL-C. At that point, the only way to resume a working server is to kill the process and restart it, not very convenient. The same happens with the stop command. The error logs at the "debug" level show the following lines after a failed restart:

        [Wed Nov 14 21:55:19 2012] [notice] caught SIGTERM, shutting down
        [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=9047): Shutdown requested 'tops'.
        [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=9047): Stopping process 'tops'.
        [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=9047): Destroying interpreters.
        [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=9047): Cleanup interpreter ''.
        [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=9047): Terminating Python.
        [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=8920): Shutdown requested 'flaskal'.
        [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=8920): Stopping process 'flaskal'.
        [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=8920): Destroying interpreters.
        [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=8920): Cleanup interpreter ''.
        [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=8920): Terminating Python.
        [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=8920): Python has shutdown.
        [Wed Nov 14 21:55:19 2012] [info] mod_wsgi (pid=9047): Python has shutdown.

    If I then try to restart again (with the process still running), I get the following error:

        * Restarting web server apache2
        (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80
        no listening sockets available, shutting down
        Unable to open logs
        Action 'start' failed.
        The Apache error log may have more information.

    Unfortunately the Apache error log doesn't have anything. When apache2 is running properly, both apps work without any problem.
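
    For reference, a short sketch of how one might confirm what is still holding port 80 after a hung restart. It assumes standard tools are present, and the kill step is a last resort for getting the service back, not a fix for the underlying hang.

        # list the process currently bound to :80 (apache2 or an orphaned daemon process)
        sudo netstat -tlnp | grep ':80 '
        # if a stale apache2 parent is stuck, remove it and start cleanly
        sudo fuser -k 80/tcp
        sudo service apache2 start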

  • Why does Photoshop CS5's Photomerge result immediately disappear?

    - by koiyu
    I have a bunch of JPG-files which I want to stitch together with Photoshop's Photomerge function. I choose File → Automate → Photomerge... and browse for the files. Photoshop opens the files and starts analyzing. I see the process bar filling and different phases are mentioned on the process bar. Nothing weird there. When the merging is done (and if I don't blink my eyes), I can see layers-palette is populated with the chosen files and, by quickly judging from the layer thumbnails, they're properly aligned. Sometimes the image window itself can be seen, but not always. Problem is that the layers and the image disappear in a flash. There is no error message. Everything is like prior starting the photomerge. No file has been changed. I could continue to use Photoshop normally. This is what I've tried so far: Loaded folder which has 38 JPG images, 4272 x 2848 and ˜ 5 megabytes per file Loaded the same files, but chose Use Files instead of Use Folder in the photomerge's window Loaded 19 JPG images, 4272 x 2848 and ˜ 5 megabytes per file Loaded 10 JPG images, ⇑ see above Loaded 5 JPG images, see above Loaded 3 JPG images, see above Scaled the images to 2256 x 1504 and ˜< 1 megabytes per file Loaded in a set of 38, 19, 10, 5, 3 Following steps are tested with these smaller files and with a set of 5 images Read Adobe's forums and reduced the amount of RAM Photoshop uses gradually from ˜ 80 % to 50 % (though I didn't understand the logic behind this) Would've reduced cache tile size to 128K, but it was set so already Disabled OpenGL Scaled the images to 800 x 533 and ˜ 100 kilobytes per file, loaded a set of 5 Read more unanswered threads around the internet In between each test I closed and reopened Photoshop. This is the first time I've even tried using photomerge. Am I doing something wrong? How can I locate what is the problem? How do I fix this? Photoshop is 64 bit Extended CS5 version. I'm on a mid-2010 quad-core (i5) iMac with up-to-date Mac OS X 10.6.6. Edit: Weird. First loading the images into one file via File → Scripts → Load Files into Stack… and then using Edit → Auto-Align Layers…, which, effectively, is the same as photomerge (even the dialog looks kind of the same), works! Even with the original JPGs without any issues. This doesn't fix photomerge, though.

  • Why can't we reach some (but not all) external web services via a VPN connection?

    - by Paul Haldane
    At work (UK university) we use a set of Windows servers running WS2008R2 and RRAS which offer VPN service to students in our accommodation. We do this to associate the network connections with individuals. Before they've connected to the VPN all they can talk to is the stuff thats needed to setup the VPN and a local web site with documentation on how to connect. Medium term we'll probably replace this but it's what we're using at the moment. VPN on the 2008 servers allocates client a private (10.x) address. Access to external sites is through NAT on the campus routers (same as any other directly connected client on a private address). Non-VPN connections aren't seeing this problem. Older servers run WS 2003 and ISA2004. That setup works but has become unreliable under load. Big difference there was that we were allocating non-RFC1918 addresses to the clients (so no NAT required). Behaviour we're seeing is that once connected to the VPN, clients can reach local web sites (that is sites on the campus network) but only some external sites. It seems (but this may be chance) that the sites we can reach are Google ones (including YouTube). We certainly have trouble reaching Microsoft's Office 365 service (which is a pain because that's where mail for most of our students is). One odd bit of behaviour is that clients can fetch (using wget on a Windows 7 client) http://www.oracle.com/ (which gets a 301 redirect) but hangs when asked to fetch http://www.oracle.com/index.html (which is what the first URL redirects to). Access works reliably if we configure clients to use our local web proxies (Squid). My gut tells me that this is likely to be something in the chain dropping replies either based on HTTP inspection or the IP address in the reply. However I'm puzzled about why we're seeing this with the VPN clients. Plan for tomorrow (when I'm back in the office) is to setup a web server on external connection so that we can monitor behaviour at both ends of the conversation (hoping that the problem manifests itself with our test server). Any suggestions for things we should be looking at?
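
    For reference, a sketch of the MTU probe that is often useful when some responses hang over a VPN while small ones and proxied ones work. Run it from the Windows 7 client while connected; 1472 is 1500 minus the 28-byte IP/ICMP overhead, and the right ceiling for this particular tunnel is an assumption to be measured, not a known value.

        rem send a do-not-fragment ping at full Ethernet payload size
        ping -f -l 1472 www.oracle.com
        rem if it reports "Packet needs to be fragmented but DF set", step -l down
        rem until it succeeds; the largest working value plus 28 is the path MTU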

  • Wifi network stopped being visible (and usable) (Linksys wag320n)

    - by s427
    Basically, my wifi network simply stopped working for no apparent reason. It doesn't appear in the list of the available networks anymore. I can see all my neighbors' networks, but not mine. It's as if it doesn't exist anymore. The internet connection (non-wifi), which goes through the same modem/router, is fine though. I already had a similar problem about one year ago (see here: Wifi network SSID not visible ), just after buying this very modem. I finally got it to work after performing two factory resets and getting rid of the Cisco "Magic" software; but this time it's not working. I use a linksys router-modem (WAG320N) which is directly connected (via network cable) to my desktop computer (Windows 7). I have (mainly) two devices that use the wifi network: my phone (Samsung Galaxy Nexus) and an Asus tablet (TF201, aka Transformer Prime). I also resurrected an old laptop computer (Dell, running Windows XP) to test that, and it doesn't see anything either (apart from the 20 other wifi networks, of course ^^). This wifi network was working just fine and has been for about a year. I haven't touched the modem settings so I have no idea what's causing the problem. I tried: making my phone "forget" about my network, hoping it would see it again after that: no luck. re-entering the network informations (SSID/password) manually on my phone: still no luck (says it's not in range) exporting the modem configuration, resetting the modem (factory reset, via modem admin), restarting it, importing the configuration: nope. factory reset, turning it off for 15 minutes, restarting, re-factory reset, and entering the configuration manually: still nothing. Has anybody experienced something similar before? Have you any suggestion to fix that? Thanks in advance. PS: to clear things up, here are the settings of my modem regarding wifi: Basic wireless settings: Configuration: manual Radio Band: 2.4GHz Wireless Network Mode: B/G/N-Mixed SSID: s427 Channel Bandwidth: Wide - 40 MHz Channel Wide Channel: 9 - 2.452GHz Standard Channel: 11 - 2.462GHz SSID Broadcast: Enable Advanced Wireless Settings AP Isolation: Disable Authentication Type: Auto Basic Rate: Default Transmission Rate: Auto N Transmission Rate: Auto CTS Protection Mode: Disable Beacon Interval: 100 DTIM Interval: 1 Fragmentation Threshold: 2346 RTS Threshold: 2346

  • How can I minimize the amount my router slows down my Internet connection speed?

    - by Lord Torgamus
    Background I'm working with what I assume is a pretty common Internet setup: a cable modem, a wireless router and a few Internet-connected devices. Lately, I've started being more demanding on my Internet connection, and noticed that using my router slows down my download speeds considerably. I just kind of dealt with it until Zune Marketplace on the Xbox 360 told me that a movie was going to take well over ten hours to download, and I just didn't want to wait that long. Good little scientist that I am, I tried to reduce the problem down to one variable. The test As a control, I turned off all the devices in the house that use wireless Internet, and unplugged all the wired devices except for the Xbox. I also power-cycled both the modem and the router. I then tried to download the movie again, and was told that it would still take over ten hours. Next, I unplugged the router, and connected the Xbox directly to the modem. The movie downloaded in just over one hour. As far as I can tell, this means that my ISP, other cable users near me, the remote servers, anything wireless-related and my machines' disk speeds can't be at fault. A similar experiment that replaced the Xbox with a wired laptop produced similar results. To me, this says "the router is responsible for things taking around ten times longer to download." My question I'd still prefer to use the router for a few reasons: it's a pain to connect and disconnect everything every time there's a big file to download direct connection to the modem isn't good for security only one machine can be connected directly to the modem at a time What can I do to have fast connection speeds while still using the router? I don't mind turning other machines off, as long as I don't have to mess with power and ethernet cables. EDIT : After asking this followup question and then this one, I installed dd-wrt on my router, and I seem to be getting higher and more consistent speeds. Perhaps more importantly, my memory use is fairly constant. I know this isn't an answer — which is why I'm not posting it as an answer — but it is how I resolved the situation, and hopefully it'll be helpful for someone.

  • Windows Update and IE fail to connect, but Chrome fine?

    - by I Gottlieb
    Out of ideas on this one. (Running Windows Vista.) I have a program that accesses the internet to retrieve financial market data. One day it tells me that it can't log in -- timeout error. I check the documentation and it says must have a working copy of IE browser installed. I check IE (have IE9) and sure enough -- it just spins. No error message, not timeout, no 'try later' -- just spins -- as far as I can tell, indefinitely. Any page, any address. Even access to a localhost site just spins. Chrome works fine. So does another program I have that fetches market data. Windows 'diagnose and repair' says my internet connection is working fine. I tried uninstall/re-install of IE. Same spinning. I tried to install Windows Updates, and guess what? I can't. I comes up with error 80072efd; checked documentation for the error and it says I should check firewall blockage. Thing is, the only firewall I have is Windows Firewall, and obviously it wouldn't be blocking Windows Update. In contrast, Windows 'Help' in all programs has no problem accessing the Internet. I had a filter on the internet connection, and this was updated just prior to first appearance of the problem. But I uninstalled the filter entirely (official, with passwd from the company's service rep) -- and no difference. I'm guessing that a high level Windows network service file is corrupted -- used only by MS programs and their ilk, but how do I find it? I'd like to avoid having to do a clean install of Windows. Much obliged for any insight. IG Ramhound -- Thanks for reply. I'm familiar with virtual machines as in e.g. JVM or an emulator for an alternative architecture or (theoretical) Turing Machine equivalence. But I'm not familiar with the way you're using the term. Please clarify -- what one needs for this VM 'test' and why you expect it will provide an advantage of insight into the problem. And what sort of 'configuration issue' are you referring to? IG
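
    For reference, since a filtering product was updated and then removed just before the problem appeared, a sketch of the usual Winsock/TCP-IP reset sequence that is worth trying after such a removal. Run it from an elevated command prompt and reboot afterwards; this is a generic cleanup step, not a confirmed fix for error 80072efd.

        netsh winsock reset
        netsh int ip reset reset.log
        rem optionally clear any WinHTTP proxy that Windows Update and IE may still be using
        netsh winhttp reset proxy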

  • What server setup for a small web development company? [closed]

    - by Giordano
    I co-own a company with a friend of mine and we have decided to buy a new server to support our business (our current server is an Asus EEE Box, working great but too limited :) ). I should mention that we are web developers but occasionally we do small-office sys admin. Thus, 99% of the time we work on GNU/Linux (mainly Ubuntu), but from time to time we need to set up a Windows environment to assist some customers (e.g. set up a temporary SQL Server 2008).

    Our requirements:

      - Low budget: we don't want the cheapest solution out there, but we can't afford to spend too much. The budget could be ~1000-1500€ (before VAT).
      - Robustness: we would like to set up a RAID array and maybe have an external disk where we can store backups.
      - Virtualization: we need to be able to set up a few servers for development. The scenario is something like this (~8 appliances running in parallel): a Redmine + Git server, a Bacula server, an FTP server, and 3-4 virtual appliances that could be set up on demand to test our applications or support a customer. The appliances could be: LAMP, Tomcat + PostgreSQL, SQL Server.
      - Support: if something breaks down, it shouldn't be too difficult to find a replacement.

    Now, given the main requirements, there are some doubts we need to clarify:

      1. Do you suggest buying a prepackaged solution (for example a customized Dell PowerEdge T110 or T310) or assembling the server ourselves (buying the separate components)?
      2. What RAID configuration do you suggest? I was thinking of RAID1 (probably cheaper) or RAID5. Should we buy a hardware RAID controller or is it OK to use software RAID (mdadm)? If a controller, which one do you suggest?
      3. What processor do you suggest (Intel Xeon, i3, i5, i7, AMD)?
      4. How much RAM? (I was thinking at least 8GB, ~1GB per appliance.)
      5. What virtualization software do you recommend? VMware seems to be the best choice, but what about Xen or KVM? We don't want to buy licenses at the moment, so we would like to consider only free options.
      6. What OS do you recommend? We know Ubuntu, Debian, and Gentoo very well (we would like to use Ubuntu Server), however it seems a lot of people go for CentOS.

    Thanks in advance if you can help us with this! It's our first "serious" server so many doubts popped up :) Please feel free to add further recommendations if you have some to share ;) Have a nice day
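
    As a point of reference for the RAID question above, a minimal sketch of the software-RAID1 route with mdadm. The device names are placeholders and it assumes two identically sized data disks alongside the system disk.

        # mirror two disks and put a filesystem on the resulting array
        sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
        sudo mkfs.ext4 /dev/md0
        # persist the array definition so it assembles at boot
        sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf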

  • Looking For iPhone 4S Alternatives? Here Are 3 Smartphones You Should Consider

    - by Gopinath
    If you going to buy iPhone 4S on a two year contract in USA, Europe or Australia you may not find it expensive. But if you are planning to buy it in any other parts of the world, you will definitely feel the heat of ridiculous iPhone 4S price. In India iPhone 4S costs approximately costs $1000 which is 30% more than the price tag of an unlocked iPhone sold in USA. Personally I love iPhones as there is no match for the user experience provided by Apple as well as the wide range of really meaning applications available for iPhone. But it breaks heart to spend $1000 for a phone and I’m forced to look at alternates available in the market. Here are the four iPhone 4S alternates available in almost all the countries where we can buy iPhone 4S Google Galaxy Nexus The Galaxy Nexus is Google’s own Android smartphone manufactured by Samsung and sold under the brand name of Google Nexus. Galaxy Nexus is the pure Android phone available in the market without any bloat software or custom user interfaces like other Androids available in the market. Galaxy Nexus is also the first Android phone to be shipped with the latest version of Android OS, Ice Cream Sandwich. This phone is the benchmark for the rest of Android phones that are going to enter the market soon. In the words of Google this smartphone is called as “Galaxy Nexus: Simple. Beautiful. Beyond Smart.”.  BGR review summarizes the phone as This is almost comical at this point, but the Samsung Galaxy Nexus is my favourite Android device in the world. Easily replacing the HTC Rezound, the Motorola DROID RAZR, and Samsung Galaxy S II, the Galaxy Nexus champions in a brand new version of Android that pushes itself further than almost any other mobile OS in the industry. Samsung Galaxy S II The one single company that is able to sell more smartphones than Apple is Samsung. Samsung recently displaced Apple from the top smartphone seller spot and occupied it with loads of pride. Samsung’s Galaxy S II fits as one the best alternatives to Apple’s iPhone 4S with it’s beautiful design and remarkable performance. Engadget summarizes Samsung Galaxy S2 review as It’s the best Android smartphone yet, but more importantly, it might well be the best smartphone, period. Of course, a 4.3-inch screen size won’t suit everyone, no matter how stupendously thin the device that carries it may be, and we also can’t say for sure that the Galaxy S II would justify a long-term iOS user foresaking his investment into one ecosystem and making the leap to another. Nonetheless, if you’re asking us what smartphone to buy today, unconstrained by such externalities, the Galaxy S II would be the clear choice. Sometimes it’s just as simple as that. Nokia Lumia 800 Here comes unexpected Windows Phone in to the boxing ring. May be they are not as great as Androids available in the market today, but they are picking up very quickly. Especially the Nokia Lumia 800 seems to be first ever Windows Phone 7 aimed at competing serious with Androids and iPhones available in the market. There are reports that Nokia Lumia 800 is outselling all Androids in UK and few high profile tech blogs are calling it as the king of Windows Phone. Considering this phone while evaluating the alternative of iPhone 4S will not disappoint you. We assure. Droid RAZR Remember the Motorola Driod that swept entire Android market share couple of years ago? The first two version of Motorola Droids were the best in the market and they out performed almost every other Android phone those days. 
    With the invasion of Samsung Androids, Motorola lost its charm. With the recent release of the Droid RAZR, Motorola seems to be heading in the right direction to reclaim that prestige. The Droid RAZR is the thinnest smartphone available in the market, and its beauty is not just skin deep. Here is a review of the phone from the Engadget blog:

        the RAZR's beauty is not only skin deep. The LTE radio, 1.2GHz dual-core processor and 1GB of RAM make sure this sleek number is ready to run with the big boys. It kept pace with, and in some cases clearly outclassed its high-end competition. Despite its deficiencies in the display department and underwhelming battery life, the RAZR looks to be a perfectly viable alternative when considering the similarly-pricey Rezound and Galaxy Nexus

    Further Reading

    So we have seen the four alternatives to the iPhone 4S available in the market, and I personally would love to buy a Samsung smartphone if I don't have the money for an iPhone 4S. If you are interested in digging deeper into the alternatives, here are a few links to help you do more research:

      - Apple iPhone 4S vs. Samsung Galaxy Nexus vs. Motorola Droid RAZR: How Their Specs Compare, by Huffington Post
      - Nokia Lumia 800 vs. iPhone 4S vs. Nexus Galaxy: Spec Smackdown, by PC World
      - Browser Speed Test: Nokia Lumia 800 vs. iPhone 4S vs. Samsung Galaxy S II, by Gizmodo
      - iPhone 4S vs Samsung Galaxy S II, by Pocket-lint
      - Apple iPhone 4S vs. Samsung Galaxy S II, by Techie Buzz

    This article titled, Looking For iPhone 4S Alternatives? Here Are 3 Smartphones You Should Consider, was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • How To Switch Back to Outlook 2007 After the 2010 Beta Ends

    - by Matthew Guay
Are you switching back to Outlook 2007 after trying out the Office 2010 beta?  Here's how you can restore your Outlook data and keep everything working fine after the switch. Whenever you install a newer version of Outlook, it will convert your profile and data files to the latest format.  This makes them work best in the newer version of Outlook, but may cause problems if you decide to revert to an older version.  If you installed the Outlook 2010 beta, it automatically imported and converted your profile from Outlook 2007.  When the beta expires, you will either have to reinstall Office 2007 or purchase a copy of Office 2010. If you choose to reinstall Office 2007, you may notice an error message each time you open Outlook. Outlook will still work fine and all of your data will be saved, but this error message can get annoying.  Here's how you can create a new profile, import all of your old data, and get rid of this error message.

Banish the Error Message with a New Profile

To get rid of this error message, we need to create a new Outlook profile.  First, make sure your Outlook data files are backed up.  Your messages, contacts, calendar, and more are stored in a .pst file in your appdata folder.  Enter the following in the address bar of an Explorer window to open your Outlook data folder, and replace username with your user name: C:\Users\username\AppData\Local\Microsoft\Outlook Copy the Outlook Personal Folders (.pst) files that contain your data. Their names are usually your email address, though they may have different names.  If in doubt, select all of the Outlook Personal Folders files, copy them, and save them in another safe place (such as your Documents folder). Now, let's remove your old profile.  Open Control Panel, and select Mail.  In Windows Vista or 7, simply enter "Mail" in the search box and select the first entry. Click the "Show Profiles…" button. Now, select your Outlook profile, and click Remove.  This will not delete your data files, but will remove them from Outlook. Press Yes to confirm that you wish to remove this profile. Open Outlook, and you will be asked to create a new profile.  Enter a name for your new profile, and press Ok. Now enter your email account information to set up Outlook as normal. Outlook will attempt to automatically configure your account settings.  This usually works for accounts with popular email systems, but if it fails to find your information you can enter it manually.  Press Finish when everything's done. Outlook will now go ahead and download messages from your email account.  In our test, we used a Gmail account that still had all of our old messages online.  Those messages are backed up in our old Outlook data files, so we can save time and not download them again.  Click the Send/Receive button at the bottom of the window, and select "Cancel Send/Receive".

Restore Your Old Outlook Data

Let's add our old Outlook file back to Outlook 2007.  Exit Outlook, and then go back to Control Panel, and select Mail as above.  This time, click the Data Files button. Click the Add button on the top left. Select "Office Outlook Personal Folders File (.pst)", and click Ok. Now, select your old Outlook data file.  It should be in the folder that opens by default; if not, browse to the backup copy we saved earlier, and select it. Press Ok at the next dialog to accept the default settings. Now, select the data file we just imported, and click "Set as Default". 
Now, all of your old messages, appointments, contacts, and everything else will be right in Outlook ready for you.  Click Ok, and then open Outlook to see the change. All of the data that was in Outlook 2010 is now ready to use in Outlook 2007.  You won't have to wait to re-download all of your emails from the server since everything's still here ready to be used.  And when you open Outlook, you won't see any error messages, either!

Conclusion

Migrating your Outlook profile back to Outlook 2007 is fairly easy, and with these steps, you can avoid seeing an error message every time you open Outlook.  With all your data intact, you're ready to get back to work instead of getting frustrated with Outlook.  Many of us use webmail and keep all of our messages in the cloud, but even on broadband connections it can take a long time to download several gigabytes of emails.
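As a side note on the backup step above: if you would rather script the .pst backup than copy the files by hand, a minimal sketch along these lines should work (close Outlook first so the files aren't locked; the source and destination folders below are assumptions - adjust them for your machine):

using System;
using System.IO;

class PstBackup
{
    static void Main()
    {
        // Default Outlook data folder for the current user (assumed location)
        string source = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
            @"Microsoft\Outlook");

        // Hypothetical backup destination under the Documents folder
        string target = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments),
            "OutlookBackup");
        Directory.CreateDirectory(target);

        // Copy every Personal Folders (.pst) file, keeping the original file names
        foreach (string file in Directory.GetFiles(source, "*.pst"))
        {
            string destination = Path.Combine(target, Path.GetFileName(file));
            File.Copy(file, destination, true);
            Console.WriteLine("Backed up " + destination);
        }
    }
}

Either way - by hand or scripted - the important thing is to have a copy of the .pst files somewhere safe before you remove the old profile.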

    Read the article

  • May 20th Links: ASP.NET MVC, ASP.NET, .NET 4, VS 2010, Silverlight

    - by ScottGu
Here is the latest in my link-listing series.  Also check out my VS 2010 and .NET 4 series and ASP.NET MVC 2 series for other on-going blog series I'm working on. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

ASP.NET MVC

How to Localize an ASP.NET MVC Application: Michael Ceranski has a good blog post that describes how to localize ASP.NET MVC 2 applications.
ASP.NET MVC with jTemplates Part 1 and Part 2: Steve Gentile has a nice two-part set of blog posts that demonstrate how to use the jTemplate and DataTable jQuery libraries to implement client-side data binding with ASP.NET MVC.
CascadingDropDown jQuery Plugin for ASP.NET MVC: Raj Kaimal has a nice blog post that demonstrates how to implement a dynamically constructed cascading dropdownlist on the client using jQuery and ASP.NET MVC.
How to Configure VS 2010 Code Coverage for ASP.NET MVC Unit Tests: Visual Studio enables you to calculate the "code coverage" of your unit tests.  This measures the percentage of code within your application that is exercised by your tests – and can give you a sense of how much test coverage you have.  Gunnar Peipman demonstrates how to configure this for ASP.NET MVC projects.
Shrinkr URL Shortening Service Sample: A nice open source application and code sample built by Kazi Manzur that demonstrates how to implement a URL Shortening Service (like bit.ly) using ASP.NET MVC 2 and EF4.  More details here.
Creating RSS Feeds in ASP.NET MVC: Damien Guard has a nice post that describes a cool new "FeedResult" class he created that makes it easy to publish and expose RSS feeds from within ASP.NET MVC sites.
NoSQL with MongoDB, NoRM and ASP.NET MVC Part 1 and Part 2: Nice two-part blog series by Shiju Varghese on how to use MongoDB (a document database) with ASP.NET MVC.  If you are interested in document databases also make sure to check out the Raven DB project from Ayende.
Using the FCKEditor with ASP.NET MVC: Quick blog post that describes how to use FCKEditor – an open source HTML Text Editor – with ASP.NET MVC.

ASP.NET

Replace Html.Encode Calls with the New HTML Encoding Syntax: Phil Haack has a good blog post that describes a useful way to quickly update your ASP.NET pages and ASP.NET MVC views to use the new <%: %> encoding syntax in ASP.NET 4.  I blogged about the new <%: %> syntax – it provides an easy and concise way to HTML encode content.
Integrating Twitter into an ASP.NET Website using OAuth: Scott Mitchell has a nice article that describes how to take advantage of Twitter within an ASP.NET Website using the OAuth protocol – which is a simple, secure protocol for granting API access.
Creating an ASP.NET report using VS 2010 Part 1, Part 2, and Part 3: Raj Kaimal has a nice three-part set of blog posts that detail how to use SQL Server Reporting Services, ASP.NET 4 and VS 2010 to create a dynamic reporting solution.
Three Hidden Extensibility Gems in ASP.NET 4: Phil Haack blogs about three obscure but useful extensibility points enabled with ASP.NET 4.

.NET 4

Entity Framework 4 Video Series: Julie Lerman has a nice, free, 7-part video series on MSDN that walks through how to use the new EF4 capabilities with VS 2010 and .NET 4.  I'll be covering EF4 in a blog series that I'm going to start shortly as well. 
Getting Lazy with System.Lazy: System.Lazy and System.Lazy<T> are new features in .NET 4 that provide a way to create objects that may need to perform time-consuming operations and defer the execution of the operation until it is needed.  Derik Whittaker has a nice write-up that describes how to use it (a short illustrative sketch follows after this link list).
LINQ to Twitter: Nifty open source library on Codeplex that enables you to use LINQ syntax to query Twitter.

Visual Studio 2010

Using Intellitrace in VS 2010: Chris Koenig has a nice 10 minute video that demonstrates how to use the new Intellitrace features of VS 2010 to enable DVR playback of your debug sessions.
Make the VS 2010 IDE Colors look like VS 2008: Scott Hanselman has a nice blog post that covers the Visual Studio Color Theme Editor extension – which allows you to customize the VS 2010 IDE however you want.
How to understand your code using Dependency Graphs, Sequence Diagrams, and the Architecture Explorer: Jennifer Marsman has a nice blog post that describes how to take advantage of some of the new architecture features within VS 2010 to quickly analyze applications and legacy code-bases.
How to maintain control of your code using Layer Diagrams: Another great blog post by Jennifer Marsman that demonstrates how to set up a "layer diagram" within VS 2010 to enforce clean layering within your applications.  This enables you to enforce a compiler error if someone inadvertently violates a layer design rule.
Collapse Selection in Solution Explorer Extension: Useful VS 2010 extension that enables you to quickly collapse "child nodes" within the Visual Studio Solution Explorer.  If you have deeply nested project structures this extension is useful.

Silverlight and Windows Phone 7

Building a Simple Windows Phone 7 Application: A nice tutorial blog post that demonstrates how to take advantage of Expression Blend to create an animated Windows Phone 7 application. If you haven't checked out my Windows Phone 7 Twitter Tutorial I also recommend reading that.

Hope this helps,

Scott

P.S. If you haven't already, check out this month's "Find a Hoster" page on the www.asp.net website to learn about great (and very inexpensive) ASP.NET hosting offers.
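To make the System.Lazy<T> item above concrete, here is a minimal sketch (my own example, not code from the linked write-up) showing that the factory delegate only runs the first time .Value is accessed, and that the same instance is reused afterwards:

using System;

class Report
{
    public Report()
    {
        // Simulates an expensive construction step
        Console.WriteLine("Expensive report built");
    }

    public string Render()
    {
        return "report body";
    }
}

class Program
{
    static void Main()
    {
        // Nothing is constructed yet - only the factory delegate is stored
        var lazyReport = new Lazy<Report>(() => new Report());

        Console.WriteLine("Before first access");
        Console.WriteLine(lazyReport.Value.Render()); // construction happens here
        Console.WriteLine(lazyReport.Value.Render()); // reuses the same instance
    }
}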

    Read the article

  • Rendering ASP.NET MVC Views to String

    - by Rick Strahl
It's not uncommon in my applications that I require longish text output that does not have to be rendered into the HTTP output stream. The most common scenario I have for 'template driven' non-Web text is for emails of all sorts. Logon confirmations and verifications, email confirmations for things like orders, status updates or scheduler notifications - all of which require merged text output both within and sometimes outside of Web applications. On other occasions I also need to capture the output from certain views for logging purposes. Rather than creating text output in code, it's much nicer to use the rendering mechanism that ASP.NET MVC already provides by way of its ViewEngines - using Razor or WebForms views - to render output to a string. This is nice because it uses the same familiar rendering mechanism that I already use for my HTTP output and it also solves the problem of where to store the templates for rendering this content in nothing more than perhaps a separate view folder. The good news is that ASP.NET MVC's rendering engine is much more modular than the full ASP.NET runtime engine, which was a real pain in the butt to coerce into rendering output to string. With MVC the rendering engine has been separated out from the core ASP.NET runtime, so it's actually a lot easier to get View output into a string.

Getting View Output from within an MVC Application

If you need to generate string output from an MVC view and pass some model data to it, the process to capture this output is fairly straightforward and involves only a handful of lines of code. The catch is that this particular approach requires that you have an active ControllerContext that can be passed to the view. This means that the following approach is limited to access from within Controller methods. Here's a class that wraps the process and provides both instance and static methods to handle the rendering:

/// <summary>
/// Class that renders MVC views to a string using the
/// standard MVC View Engine to render the view.
///
/// Note: This class can only be used within MVC
/// applications that have an active ControllerContext.
/// </summary>
public class ViewRenderer
{
    /// <summary>
    /// Required Controller Context
    /// </summary>
    protected ControllerContext Context { get; set; }

    public ViewRenderer(ControllerContext controllerContext)
    {
        Context = controllerContext;
    }

    /// <summary>
    /// Renders a full MVC view to a string. Will render with the full MVC
    /// View engine including running _ViewStart and merging into _Layout
    /// </summary>
    /// <param name="viewPath">
    /// The path to the view to render. Either in same controller, shared by
    /// name or as fully qualified ~/ path including extension
    /// </param>
    /// <param name="model">The model to render the view with</param>
    /// <returns>String of the rendered view or null on error</returns>
    public string RenderView(string viewPath, object model)
    {
        return RenderViewToStringInternal(viewPath, model, false);
    }

    /// <summary>
    /// Renders a partial MVC view to string. Use this method to render
    /// a partial view that doesn't merge with _Layout and doesn't fire
    /// _ViewStart.
    /// </summary>
    /// <param name="viewPath">
    /// The path to the view to render. Either in same controller, shared by
    /// name or as fully qualified ~/ path including extension
    /// </param>
    /// <param name="model">The model to pass to the viewRenderer</param>
    /// <returns>String of the rendered view or null on error</returns>
    public string RenderPartialView(string viewPath, object model)
    {
        return RenderViewToStringInternal(viewPath, model, true);
    }

    public static string RenderView(string viewPath, object model,
                                    ControllerContext controllerContext)
    {
        ViewRenderer renderer = new ViewRenderer(controllerContext);
        return renderer.RenderView(viewPath, model);
    }

    public static string RenderPartialView(string viewPath, object model,
                                           ControllerContext controllerContext)
    {
        ViewRenderer renderer = new ViewRenderer(controllerContext);
        return renderer.RenderPartialView(viewPath, model);
    }

    protected string RenderViewToStringInternal(string viewPath, object model, bool partial = false)
    {
        // first find the ViewEngine for this view
        ViewEngineResult viewEngineResult = null;
        if (partial)
            viewEngineResult = ViewEngines.Engines.FindPartialView(Context, viewPath);
        else
            viewEngineResult = ViewEngines.Engines.FindView(Context, viewPath, null);

        if (viewEngineResult == null)
            throw new FileNotFoundException(Properties.Resources.ViewCouldNotBeFound);

        // get the view and attach the model to view data
        var view = viewEngineResult.View;
        Context.Controller.ViewData.Model = model;

        string result = null;
        using (var sw = new StringWriter())
        {
            var ctx = new ViewContext(Context, view,
                                      Context.Controller.ViewData,
                                      Context.Controller.TempData,
                                      sw);
            view.Render(ctx, sw);
            result = sw.ToString();
        }

        return result;
    }
}

The key is the RenderViewToStringInternal method. The method first tries to find the view to render based on its path, which can either be in the current controller's view path or the shared view path using its simple name (PasswordRecovery), or alternately by its full virtual path (~/Views/Templates/PasswordRecovery.cshtml). This code should work both for Razor and WebForms views, although I've only tried it with Razor Views. Note that WebForms Views might actually be better for plain text, as Razor adds all sorts of white space into its output when there are code blocks in the template. The Web Forms engine provides more accurate rendering for raw text scenarios. Once a view engine is found the view to render can be retrieved. Views in MVC render based on data that comes off the controller, like the ViewData which contains the model along with the actual ViewData and ViewBag. From the View and some of the Context data a ViewContext is created, which is then used to render the view with. The View picks up the Model and other data from the ViewContext internally and processes the View the same way it would be processed if it were to send its output into the HTTP output stream. The difference is that we override the ViewContext's output stream with one we provide and capture into a StringWriter(). After rendering completes, the result holds the output string. If an error occurs, the error behavior is similar to what you see with regular MVC errors - you get a full yellow screen of death including the view error information with the offending line highlighted. It's your responsibility to handle the error - or let it bubble up to your regular Controller Error filter if you have one. To use the simple class you only need a single line of code if you call the static methods. 
Here's an example of some Controller code that is used to send a user notification to a customer via email in one of my applications:

[HttpPost]
public ActionResult ContactSeller(ContactSellerViewModel model)
{
    InitializeViewModel(model);

    var entryBus = new busEntry();
    var entry = entryBus.LoadByDisplayId(model.EntryId);

    if ( string.IsNullOrEmpty(model.Email) )
        entryBus.ValidationErrors.Add("Email address can't be empty.", "Email");
    if ( string.IsNullOrEmpty(model.Message))
        entryBus.ValidationErrors.Add("Message can't be empty.", "Message");

    model.EntryId = entry.DisplayId;
    model.EntryTitle = entry.Title;

    if (entryBus.ValidationErrors.Count > 0)
    {
        ErrorDisplay.AddMessages(entryBus.ValidationErrors);
        ErrorDisplay.ShowError("Please correct the following:");
    }
    else
    {
        string message = ViewRenderer.RenderView("~/views/template/ContactSellerEmail.cshtml",
                                                 model, ControllerContext);
        string title = entry.Title + " (" + entry.DisplayId + ") - " + App.Configuration.ApplicationName;
        AppUtils.SendEmail(title, message, model.Email, entry.User.Email, false, false);
    }
    return View(model);
}

Simple! The view in this case is just a plain MVC view, and in this case it's a very simple plain text email message (edited for brevity here) that is created and sent off:

@model ContactSellerViewModel
@{
    Layout = null;
}
re: @Model.EntryTitle
@Model.ListingUrl

@Model.Message

** SECURITY ADVISORY - AVOID SCAMS
** Avoid: wiring money, cross-border deals, work-at-home
** Beware: cashier checks, money orders, escrow, shipping
** More Info: @(App.Configuration.ApplicationBaseUrl)scams.html

Obviously this is a very simple view (I edited out more from this page to keep it brief) - but other template views are much more complex HTML documents or long messages that are occasionally updated, and they are a perfect fit for Razor rendering. It even works with nested partial views and _layout pages.

Partial Rendering

Notice that I'm rendering a full View here. In the view I explicitly set the Layout=null to avoid pulling in _layout.cshtml for this view. This can also be controlled externally by calling the RenderPartial method instead:

string message = ViewRenderer.RenderPartialView("~/views/template/ContactSellerEmail.cshtml", model, ControllerContext);

With this line of code no layout page (or _viewstart) will be loaded, so the output generated is just what's in the view. I find myself using Partials most of the time when rendering templates, since the target of templates usually tends to be emails or other HTML-fragment-like output, so the RenderPartialView() method is definitely useful to me.

Rendering without a ControllerContext

The preceding class is great when you need template rendering from within MVC controller actions or anywhere where you have access to the request Controller. But if you don't have a controller context handy - maybe inside a utility function that is static, a non-Web application, or an operation that runs asynchronously in ASP.NET - using the above code is impossible. I haven't found a way to manually create a Controller context to provide the ViewContext() with what it needs from outside of the MVC infrastructure. However, there are ways to accomplish this, but they are a bit more complex. It's possible to host the RazorEngine on your own, which sidesteps all of the MVC framework and HTTP and just deals with the raw rendering engine. I wrote about this process in Hosting the Razor Engine in Non-Web Applications a long while back. 
It's quite a process to create a custom Razor engine and runtime, but it allows for all sorts of flexibility. There's also a RazorEngine CodePlex project that does something similar. I've been meaning to check out the latter but haven't gotten around to it since I have my own code to do this. The trick to hosting the RazorEngine is to have it behave properly inside of an ASP.NET application and properly cache content so templates aren't constantly rebuilt and reparsed. Anyway, in the same app as above I have one scenario where no ControllerContext is available: I have a background scheduler running inside of the app that fires on timed intervals. This process could be external, but because it's lightweight we decided to fire it right inside of the ASP.NET app on a separate thread. In my app the code that renders these templates does something like this:

var model = new SearchNotificationViewModel()
{
    Entries = entries,
    Notification = notification,
    User = user
};

// TODO: Need logging for errors sending
string razorError = null;
var result = AppUtils.RenderRazorTemplate("~/views/template/SearchNotificationTemplate.cshtml",
                                          model, razorError);

which references a couple of helper functions that set up my RazorFolderHostContainer class:

public static string RenderRazorTemplate(string virtualPath, object model, string errorMessage = null)
{
    var razor = AppUtils.CreateRazorHost();
    var path = virtualPath.Replace("~/", "").Replace("~", "").Replace("/", "\\");
    var merged = razor.RenderTemplateToString(path, model);
    if (merged == null)
        errorMessage = razor.ErrorMessage;
    return merged;
}

/// <summary>
/// Creates a RazorStringHostContainer and starts it
/// Call .Stop() when you're done with it.
///
/// This is a static instance
/// </summary>
/// <param name="virtualPath"></param>
/// <param name="binBasePath"></param>
/// <param name="forceLoad"></param>
/// <returns></returns>
public static RazorFolderHostContainer CreateRazorHost(string binBasePath = null, bool forceLoad = false)
{
    if (binBasePath == null)
    {
        if (HttpContext.Current != null)
            binBasePath = HttpContext.Current.Server.MapPath("~/");
        else
            binBasePath = AppDomain.CurrentDomain.BaseDirectory;
    }

    if (_RazorHost == null || forceLoad)
    {
        if (!binBasePath.EndsWith("\\"))
            binBasePath += "\\";

        //var razor = new RazorStringHostContainer();
        var razor = new RazorFolderHostContainer();
        razor.TemplatePath = binBasePath;
        binBasePath += "bin\\";
        razor.BaseBinaryFolder = binBasePath;
        razor.UseAppDomain = false;
        razor.ReferencedAssemblies.Add(binBasePath + "ClassifiedsBusiness.dll");
        razor.ReferencedAssemblies.Add(binBasePath + "ClassifiedsWeb.dll");
        razor.ReferencedAssemblies.Add(binBasePath + "Westwind.Utilities.dll");
        razor.ReferencedAssemblies.Add(binBasePath + "Westwind.Web.dll");
        razor.ReferencedAssemblies.Add(binBasePath + "Westwind.Web.Mvc.dll");
        razor.ReferencedAssemblies.Add("System.Web.dll");

        razor.ReferencedNamespaces.Add("System.Web");
        razor.ReferencedNamespaces.Add("ClassifiedsBusiness");
        razor.ReferencedNamespaces.Add("ClassifiedsWeb");
        razor.ReferencedNamespaces.Add("Westwind.Web");
        razor.ReferencedNamespaces.Add("Westwind.Utilities");

        _RazorHost = razor;
        _RazorHost.Start();
        //_RazorHost.Engine.Configuration.CompileToMemory = false;
    }

    return _RazorHost;
}

The RazorFolderHostContainer is essentially a full runtime that mimics a folder structure like a typical Web app does, including caching semantics and compiling code only if code changes on disk. It maps a folder hierarchy to views using the ~/ path syntax. 
The host is then configured to add assemblies and namespaces. Unfortunately the engine is not exactly like MVC's Razor - the expression expansion and code execution are the same, but some of the support methods like sections, helpers etc. are not all there, so templates have to be a bit simpler. There are other folder hosts provided as well to directly execute templates from strings (using RazorStringHostContainer). The following is an example of an HTML email template:

@inherits RazorHosting.RazorTemplateFolderHost<ClassifiedsWeb.SearchNotificationViewModel>
<html>
<head>
    <title>Search Notifications</title>
    <style>
        body { margin: 5px; font-family: Verdana, Arial; font-size: 10pt; }
        h3 { color: SteelBlue; }
        .entry-item { border-bottom: 1px solid grey; padding: 8px; margin-bottom: 5px; }
    </style>
</head>
<body>
    Hello @Model.User.Name,<br />
    <p>Below are your Search Results for the search phrase:</p>
    <h3>@Model.Notification.SearchPhrase</h3>
    <small>since @TimeUtils.ShortDateString(Model.Notification.LastSearch)</small>
    <hr />

You can see that the syntax is a little different. Instead of the familiar @model header, the raw Razor @inherits tag is used to specify the template base class (which you can extend). I took a quick look through the feature set of RazorEngine on CodePlex (now GitHub I guess) and the template implementation they use is closer to MVC's Razor, but there are other differences. In the end don't expect exact behavior like MVC templates if you use an external Razor rendering engine. This is not what I would consider an ideal solution, but it works well enough for this project. My biggest concern is the overhead of hosting a second Razor engine in a Web app, and the fact that here the differences in template rendering between 'real' MVC Razor views and another RazorEngine really are noticeable.

You win some, you lose some

It's extremely nice to see that if you have a ControllerContext handy (which probably addresses 99% of Web app scenarios) rendering a view to string using the native MVC Razor engine is pretty simple. Kudos on making that happen - as it solves a problem I see in just about every Web application I work on. But it is a bummer that a ControllerContext is required to make this simple code work. It'd be really sweet if there was a way to render views without being so closely coupled to the ASP.NET or MVC infrastructure that requires a ControllerContext. Alternately it'd be nice to have a way for an MVC based application to create a minimal ControllerContext from scratch - maybe somebody's been down that path. I tried for a few hours to come up with a way to make that work but gave up in the soup of nested contexts (MVC/Controller/View/Http). I suspect going down this path would be similar to hosting the ASP.NET runtime requiring a WorkerRequest. Brrr…. The sad part is that it seems to me that a View should really not require much 'context' of any kind to render output to string. Yes, there are a few things that clearly are required, like the virtual and possibly the disk paths to the root of the app, but beyond that view rendering should not require much. But, no such luck. 
For now custom RazorHosting seems to be the only way to make Razor rendering go outside of the MVC context…

Resources

Full ViewRenderer.cs source code from Westwind.Web.Mvc library
Hosting the Razor Engine for Non-Web Applications
RazorEngine on GitHub

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in ASP.NET  ASP.NET MVC

    Read the article

< Previous Page | 921 922 923 924 925 926 927 928 929 930 931 932  | Next Page >