Search Results

Search found 10203 results on 409 pages for 'wcf rest starter kit'.

Page 242 of 409

  • Unable to map to web folder using WebDAV client on Windows Server 2008 R2

    - by user74989
    I have a client running Windows Server 2008 R2 on several servers. One of the servers also runs SharePoint 3.0, and my client has created a web folder to map to. I can map to the web folder from all of the Server 2008 R2 boxes that have the WebDAV client (part of the Desktop Experience feature) installed, except for the server the folder resides on. When I attempt to map to the web folder on that server, I am repeatedly prompted to enter my credentials. I am using the same account that I used to map the web folder on the other servers. I have also tried mapping from the command line and receive 'Access Denied'. What may be causing the problem? I would think that if I can map the drive from one server, I should be able to map it from the rest as long as the WebDAV client is installed, especially on the server where the folder is located. Jesse
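
    A command-line mapping attempt of the kind described would look something like this sketch (drive letter, server name and share path are hypothetical):

        net use Z: https://sharepoint01/shared/documents /user:DOMAIN\jesse

    The WebDAV redirector behind this command is the same WebDAV client component named above, so an 'Access Denied' here points at the server-side authentication exchange rather than at the drive-mapping UI.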

    Read the article

  • Google Chrome Enterprise - Any Gotchas?

    - by AJ
    Has anyone rolled out Google Chrome to a medium/large organisation? I would like to suggest it to our management (because I think it would work very nicely with some of our intranet applications), and I would like to find out what problems (if any) the rest of the world has been experiencing with it. Have you found any problems? I'm thinking of enterprise-level problems, and I'm assuming we can solve anything that requires a specific configuration/proxy setting/etc. I don't really know what might be a problem, but I wonder whether there are any usability problems that occur when non-geeks use it, or problems which only rear their ugly heads when you've got 50 users all doing something unexpected. Any helpful information or suggestions would be appreciated. Thanks. UPDATED: We tend to use Microsoft stuff, so SharePoint, IIS and SQL Server are typical building blocks of internal sites. (Thanks, @Jim, for reminding me to mention that.)

    Read the article

  • Creating a new app pool each month to limit the scope of issues

    - by user39550
    I have about 360 sites running in a single app pool. Now, I know we have a coding issue with one of those sites, where we have accidentally coded a memory leak. So what happens is the site runs, the memory leak starts, and soon the app pool runs out of memory. Then, slowly but surely, the rest of the 360 sites start going down like a domino effect. I understand that the root of the problem is some bad coding, which we'll fix, but instead of bringing down all 360 sites, I was thinking we could create a new app pool monthly, and every site we create would go into that month's app pool. First, that would limit the scope of any issue to 5-20 sites, and second, if one site started having issues we wouldn't be bringing down all 360 sites. Are there any problems with this idea, or possible ramifications? Thanks in advance! Jeremiah
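
    If the monthly pool is scripted, the provisioning step could look something like this sketch, assuming IIS 7 or later (pool and site names are hypothetical):

        %windir%\system32\inetsrv\appcmd add apppool /name:"Pool-2014-06"
        %windir%\system32\inetsrv\appcmd set app "NewCustomerSite/" /applicationPool:"Pool-2014-06"

    Each pool then gets its own worker process, so a leak in one month's sites exhausts only that process.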

    Read the article

  • Using a DataSet instead of custom business entities in soa and n-tier architecture

    - by kathy
    I'm working on a large, high-volume transactional enterprise application which has been designed using an n-tier architecture. It was developed on the .NET platform using C#, VB.NET, Framework 3.5, ObjectDataSources, DataSet, WCF, ASP.NET UpdatePanel, JavaScript, JSON, and 3rd-party tools. The application is supposed to be scalable, easily maintained and robust, to support integrations, and to make sure that my services are created in a format that can be understood by other systems. The problem is, this application is about 70% complete, but now I am wondering whether the following will cause us future issues: I'm using a DataSet and a DataTable to get and set the data from and to stored procedures in the database via the ObjectDataSources, and I was wondering if this would prevent my application from achieving the above goals. Actually, I am not anti-OO. I write lots of classes for different purposes, but I didn't use entity objects (custom business entities) instead of the previous approach because I have a large database that may contain 50 tables, and I was afraid to create entities for each table; in the future, if I need to change the schema of the database, it might have a huge effect on the application.
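
    For context, the two approaches being weighed look roughly like this sketch (connection string, procedure and entity names are hypothetical):

        using System.Data;
        using System.Data.SqlClient;

        static class DataAccess
        {
            // DataSet/DataTable style: the schema lives in the database only.
            public static DataTable LoadCustomers(string connectionString)
            {
                var table = new DataTable();
                using (var conn = new SqlConnection(connectionString))
                using (var adapter = new SqlDataAdapter("dbo.GetCustomers", conn))
                {
                    adapter.SelectCommand.CommandType = CommandType.StoredProcedure;
                    adapter.Fill(table); // opens and closes the connection itself
                }
                return table;
            }
        }

        // Custom business entity style: the schema is mirrored in a class,
        // so a schema change in the database means a code change here too.
        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }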

    Read the article

  • zram trimming by writing zero pages

    - by qdot
    I'm using ZRAM as the backing block device for the /tmp filesystem, set up in the following manner:

        echo 8000000000 > /sys/block/zram0/disksize
        mkfs.ext4 -O dir_nlink,extent,extra_isize,flex_bg,^has_journal,uninit_bg -m0 \
            -b 4096 -L "zram0" /dev/zram0
        mount -o barrier=0,commit=240,noatime,nodev,nosuid /dev/zram0 /tmp
        chmod aogu+rwx /tmp

    It works out reasonably well for me. However, there is an issue here: when files are removed, their blocks are not zeroed, so ZRAM does not release the compressed pages. Obviously, running

        dd if=/dev/zero of=/tmp/ZERO bs=1M count={free-space-some-rest}; rm /tmp/ZERO

    clears it up in ZRAM - it gets notified of the zero pages and shrinks the store. How can I get ext4 to zero freed pages on delete? Also, any other suggestions on how to optimize this?

    Read the article

  • Windows 2003 DNS updates from ISC DHCP server

    - by wolfgangsz
    We have a very mixed network, with most clients being Debian Lenny and the rest Windows XP/Vista/7. The network itself is split into two segments (for technical reasons) called "corporate" and "engineering". On the "corporate" side, all clients get their IP addresses from a Windows DHCP server, and the dynamic updates into the Windows DNS work just fine. On the "engineering" side, clients get their IP addresses from a Linux machine running the standard ISC DHCP server. Although this server is configured to do dynamic DNS updates, they actually don't work. Anybody got any advice on how to fix this? Please note: dynamic updates from the clients directly into the DNS would work, but are not an option for us. So this is strictly about how to make this work from an ISC DHCP server to a Windows DNS server.
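
    For reference, the dhcpd side of such a setup would look roughly like this dhcpd.conf sketch (zone name and addresses are hypothetical). Note that stock ISC dhcpd cannot perform the GSS-TSIG authentication Windows DNS uses for secure dynamic updates, so the target zone has to accept nonsecure updates:

        ddns-update-style interim;
        ddns-updates on;
        ddns-domainname "eng.example.com.";

        zone eng.example.com. {
            primary 192.168.10.5;    # the Windows DNS server
        }
        zone 10.168.192.in-addr.arpa. {
            primary 192.168.10.5;
        }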

    Read the article

  • HTTP, HTTPS and FTP are not working, but SMTP and IMAP are

    - by Unicron
    Hi all, yesterday a strange thing happened on a friend's computer. After booting, the ports for HTTP, HTTPS and FTP are closed, but e-mail is still working. In the Control Panel, the Windows Firewall seems active even if he tries to deactivate it. I have a suspicion that it is the fault of Norton Internet Security 2010. We have tried to uninstall it, but the uninstallation did not work. When using the removal tool from Symantec, it just goes to 23% and then crashes. The process ccSvcHst.exe is still running. How can I safely remove the rest of Norton Internet Security? Thanks in advance. [edit] Norton Internet Security 2010 has been successfully removed, but there is still no connectivity.

    Read the article

  • Merge values from an Excel file into an .html file opened in Word 2007

    - by Kelbizzle
    I have this newsletter I've written. I want to take the values in the rows and somehow have them merged into this HTML file I have opened in Word, sort of like a mail merge. In the newsletter I have 3 URLs that look like: www.mydomain.com/php?id= I want to be able to replace all of the URLs for all 230 records in the Excel file with something like: www.mydomain.com/?id=$id where $id would get replaced with the id of the record. The same goes for the rest of the fields, like $firstname, $lastname, $email and $phone number. Is there a simple way to do this?
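
    Short of a true mail merge, a small program can do the substitution; a minimal C# sketch, assuming the sheet has been saved as records.csv with columns id,firstname,lastname,email,phone (the file name and column layout are hypothetical):

        using System.IO;

        class NewsletterMerge
        {
            static void Main()
            {
                string template = File.ReadAllText("newsletter.html");
                foreach (string line in File.ReadAllLines("records.csv"))
                {
                    string[] f = line.Split(',');
                    // Substitute each placeholder with the record's value
                    string merged = template
                        .Replace("$id", f[0])
                        .Replace("$firstname", f[1])
                        .Replace("$lastname", f[2])
                        .Replace("$email", f[3])
                        .Replace("$phone", f[4]);
                    File.WriteAllText("out_" + f[0] + ".html", merged);
                }
            }
        }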

    Read the article

  • Should I enable 802.3x hardware flow control?

    - by Stu Thompson
    What is the conventional wisdom regarding 802.3x flow control? I'm setting up a network at a new colo and am wondering whether I should enable it or not. My oh-cool-a-bright-and-shiny-new-toy self wants to enable it, but this seems like one of those decisions that could blow up in my face later on. My network: An HP ProCurve 2510G-24 switch. A pair of Debian 5 HP DL380 G5s with built-in NC373i 2-port NICs LACP'd as one link, 9000-byte jumbo frames enabled. (Application) A pair of hand-built Ubuntu servers with 4-port Intel Pro/1000s LACP'd as one link, 9000-byte jumbo frames enabled. (NAS) A few other servers with single 1Gbps ports, but one with 100Mbps. Most of this kit supports 802.3x. I've been enabling it as I go along and am about to test the network. But as my 'go live' day nears, I am worried about the 802.3x decision, as I've never explicitly used it before. Also, I've read some 10-year-old articles out there on the Intertubes that warn against using flow control. Should I be enabling 802.3x hardware flow control?
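
    On the Linux boxes, the pause-frame settings can at least be inspected and toggled per NIC before committing either way; a small sketch (the interface name is hypothetical):

        ethtool -a eth0                # show current pause parameter state
        ethtool -A eth0 rx on tx on    # enable receive and transmit flow control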

    Read the article

  • Active Directory server synchronization

    - by Mit Naik
    I have 3 AD servers running Windows Server 2008 R2 at 3 different places; the main server is at a datacenter, and 2 are in our local offices, which are at 2 different locations. I want to synchronize all 3 servers, where the datacenter server acts as the central server and the other 2 servers sync with it. Please provide the steps or a tutorial for doing this. We also want changes made on 1 of the AD servers to be propagated automatically to all the servers. For example, if I change a user's password on our local server, it should be updated on our main AD server and the other branch server too. One more question: I have already created the main datacenter AD as domain.local and the other domains as xyz.local and abc.local. How can I replicate the additional AD domains with the main datacenter DC? Also, do we require a VPN connection, or is there any other way to replicate the servers without one?
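
    As a sketch of the tooling involved: once the DCs replicate as part of one forest, replication health can be checked, and a sync pushed manually, with repadmin (the DC name is hypothetical):

        :: Summarize replication health across all DCs
        repadmin /replsummary
        :: Push changes outward from DC01 to all partitions, across site links
        repadmin /syncall DC01 /AdeP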

    Read the article

  • IIS FTP service - download timeouts and restarts getting the data twice

    - by accel229
    We have an IIS FTP site on a Windows Server 2003 x64 machine. The Application Layer Gateway service is disabled (so http://support.microsoft.com/kb/931130 does not apply). The Windows Firewall service is disabled as well. The connection timeout for the FTP site (there is only one) is set to 1,200 seconds = 20 minutes. An external client can connect to the site, list directory contents and download small files. When a client attempts to download a large file (e.g., if the download continues for 3 minutes, which is still under 20 minutes, but relatively long), the server sends all the data, then the connection times out; the client issues REST / RETR commands attempting to restart the download from after the last byte (which I believe should succeed and receive exactly 0 bytes), but the server behaves as if the client tried to restart after byte 0, that is, it sends the entire file all over again. Any ideas on how to fix this?
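
    To illustrate the expected versus observed behaviour, the restart exchange should go roughly like this (the file name and offset are hypothetical, and the annotations are not part of the protocol):

        REST 157286400      # restart marker just past the last byte received
        350 Restarting at 157286400.
        RETR bigfile.zip    # expected: 0 further data bytes, then 226 Transfer complete
                            # observed here: the server streams the whole file again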

    Read the article

  • 802.11N Windows XP Clients Unable to See Each Other

    - by zippy_raggle
    I am attempting to upgrade my wireless network from 802.11g to 802.11n. When connected to the 'G' access point, the client laptops (there are 7, running Windows XP) are able to connect and browse the network for each other. When I connect them to the 'N' access point, they can see the access point but not the rest of the network. I tried swapping out the access point for a wireless router, but this did not change anything. I verified on both the AP and the router that client isolation was turned off. Searching the web has not turned up any other ideas, and the manuals don't show anything either. Why can't my wireless client nodes see each other on the 'N' network?

    Read the article

  • Ignoring GET parameters in Varnish VCL

    - by JamesHarrison
    Okay: I've got a site set up which has some APIs we expose to developers, in the format /api/item.xml?type_ids=34,35,37&region_ids=1000002,1000003&key=SOMERANDOMALPHANUM In this URI, type_ids is always set; region_ids and key are optional. The important thing to note is that the key variable does not affect the content of the response. It is used for internal tracking of requests so we can identify people who make slow or otherwise unwanted requests. In Varnish, we have a VCL like this:

        if (req.http.host ~ "the-site-in-question.com") {
            if (req.url ~ "^/api/.+\.xml") {
                unset req.http.cookie;
            }
        }

    We just strip cookies out and let the backend do the rest as far as cache times are concerned (this is a hackaround, since Rails/authlogic sends session cookies with API responses). At present, though, distinct developers are basically hitting different caches, since &key=SOMEALPHANUM is considered part of the Varnish hash for storage. This is obviously not a great solution, and I'm trying to work out how to tell Varnish to ignore that part of the URI.
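
    One way to do this is to rewrite req.url in vcl_recv, before Varnish hashes it, so the key parameter never becomes part of the cache key; a minimal sketch, assuming key is alphanumeric and the last parameter, as in the example above:

        sub vcl_recv {
            if (req.http.host ~ "the-site-in-question.com" && req.url ~ "^/api/.+\.xml") {
                # Drop the tracking key so all developers share one cached object
                set req.url = regsub(req.url, "[?&]key=[A-Za-z0-9]*$", "");
            }
        }

    A key that can appear mid-query-string would need a slightly more careful regsub so the remaining parameters keep a valid separator.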

    Read the article

  • What is the command to check if a command's results mention OK?

    - by Manuel
    Alright, so I was playing around with changing the MTU size and wanted to make a batch file to automatically lower it and then raise it later. This is probably simple, but I just can't figure it out. The point is: is there a way to run a command which would normally echo out "OK", and check whether it actually does say OK? And if it doesn't say OK, to stop the rest of the file from running and exit out. The command I'm using is netsh interface ipv4 set subinterface "Local Area Connection" mtu=386 store=persistent which, as I mentioned above, prints out an OK. I just want to check whether it ran correctly, and if not, then do __
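
    A minimal batch sketch of this check, piping the command's output through find for a case-insensitive match:

        netsh interface ipv4 set subinterface "Local Area Connection" mtu=386 store=persistent | find /i "ok" >nul
        if errorlevel 1 (
            echo MTU change did not report OK - aborting.
            exit /b 1
        )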

    Read the article

  • Xorg eating up too much RAM on Ubuntu 9.10 box

    - by Yang
    Xorg is eating up 444MB of 2GB total RAM on my Ubuntu 9.10 x86_64 machine with the nvidia drivers installed for an nvidia G86 (GeForce 8300 GS). top shows:

        top - 18:21:41 up 6 days,  2:40,  9 users,  load average: 0.46, 1.12, 1.22
        Tasks: 266 total,   3 running, 262 sleeping,   1 stopped,   0 zombie
        Cpu(s):  8.4%us,  2.0%sy,  0.0%ni, 89.1%id,  0.5%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:   2055736k total,  1965136k used,    90600k free,     3952k buffers
        Swap:   979924k total,   979908k used,       16k free,   102636k cached

          PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
         1432 root   20   0 1154m 442m 7492 S    8 22.0  32:56.97 Xorg
        18462 yang   20   0 1001m 219m 8356 S    0 10.9   5:13.25 chrome
        24099 yang   20   0  865m  83m  13m S    0  4.2   0:06.91 chrome

    xrestop shows:

        xrestop - Display: :0.0
                  Monitoring 47 clients. XErrors: 0
                  Pixmaps: 40430K total, Other: 142K total, All: 40573K total

        res-base Wins  GCs Fnts Pxms Misc   Pxm mem  Other   Total   PID Identifier
        1c00000    21   46    1   19  697     9128K    18K   9146K  3169 x-nautilus-desktop
        1000000     4    3    0   17  194     9000K     4K   9004K  3134 gnome-settings-daemon
        1600000    51    2    1   25 1100     7648K    28K   7676K     ? compiz

    For comparison, here's my other Ubuntu box, which also has compiz etc. enabled, but with an ATI RV370 (Radeon X300SE):

        top - 18:18:18 up 58 days,  4:27,  9 users,  load average: 0.00, 0.00, 0.00
        Tasks: 224 total,   1 running, 223 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.3%us,  0.3%sy,  0.0%ni, 98.8%id,  0.5%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:   1024964k total,   987124k used,    37840k free,   247012k buffers
        Swap:  2048276k total,    94296k used,  1953980k free,   264744k cached

          PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
        24324 yang   20   0 61936  35m 6364 S    0  3.5   4:35.84 nxagent
         1768 ntop   20   0  190m  32m 5388 S    1  3.2 283:36.15 ntop
         1178 root   20   0 60588  29m 1788 S    0  3.0   5:48.89 console-kit-dae
        ...
         1315 root   20   0  343m 4956 4020 S    0  0.5   3:43.87 Xorg

    Any ideas on how to get to the bottom of this? (i.e., not "log out"/"reboot") Thanks in advance.

    Read the article

  • Firefox 3.5.6 Uninstaller not working

    - by Wesley
    Hi all, I want to uninstall Firefox 3.5.6 in favor of Opera 10.10. I went into Control Panel > Add or Remove Programs and tried clicking Remove from there, but nothing happens. I then went into the Program Files folder for Firefox, found the Uninstaller folder, and ran the helper.exe in there. Still, nothing happens. Is there some way to uninstall Firefox without screwing up the rest of my computer? The computer is an eMachines T2482: http://emachines.com/support/product_support.html?cat=Desktops&subcat=T%20Series&model=T2482 RAM has been upgraded to 1 GB PC3200 DDR, the GPU has been upgraded to a 128MB GeForce 6200, the HDD has been upgraded to 160 GB, the PSU has been upgraded to 350W, and the floppy and DVD reader have been disconnected from the mobo and power. The OS is XP Pro SP3.

    Read the article

  • Schema Inheritance in BizTalk Server

    - by newbtdev
    Hi, I was just wondering if anyone has tried something like schema inheritance with BizTalk schemas. I am using the WCF Adapter and the 'consume adapter service' feature to generate a schema automatically. Instead of always generating a schema, and since most of my schemas are the same, what I want is something like a base schema. The scenario I'm testing is flat-file debatching: for debatching I need to set the maxOccurs property of the schema to '1', but for batch processing it should be '*'. Instead of creating two different schemas, I want to create just a base schema, inherit from it, and then change the maxOccurs property in the derived schema. Any help would be appreciated. Many thanks

    Read the article

  • "Show hidden files" option is not working

    - by crazygamer
    OK, I know this is a very basic Windows thing, but I am asking it here in search of an answer. I plugged in my pendrive and it autoran. This turned the "show hidden files" option off; I mean, I am not able to see my hidden files because the setting does not apply the change. Which registry value has been modified? I have scanned my computer using 4 antivirus programs. BitDefender found and deleted something in a temporary folder; the rest didn't show anything. I have encountered this problem a few times before, but this time I don't want to format ;-)
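
    For what it's worth, the setting this class of autorun malware commonly tampers with is the SHOWALL CheckedValue; a .reg sketch restoring it (verify the path and current value on the affected machine before importing):

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Advanced\Folder\Hidden\SHOWALL]
        "CheckedValue"=dword:00000001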

    Read the article

  • Test Driven Development For Complex Methods involving external dependency

    - by bill_tx
    I am implementing a Service Contract for a WCF service. As per TDD, I wrote a test case that just passes using hardcoded values. After that I started to put the real logic into my service implementation. The actual logic relies on 3-4 external services and a database. What should I do with the original test case I wrote? If I keep it the same, then in order to pass, the test will have to call several other external services. So I have a question in general: what should I do if I write a test case for a Business Facade first using TDD, and later, when I add the real logic, it involves external dependencies?
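
    The usual move is to put each external dependency behind an interface, so the original test keeps passing against a fake while only dedicated integration tests touch the real services; a minimal C# sketch (all names hypothetical):

        // The facade depends on an abstraction, not on the WCF client itself.
        public interface IRateService { decimal GetRate(string code); }

        public class FakeRateService : IRateService
        {
            // Canned value so the unit test needs no network or database.
            public decimal GetRate(string code) { return 1.25m; }
        }

        public class BusinessFacade
        {
            private readonly IRateService _rates;
            public BusinessFacade(IRateService rates) { _rates = rates; }
            public decimal Quote(string code) { return _rates.GetRate(code) * 100m; }
        }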

    Read the article

  • How come my Apache can't read my media folder, but it can load the site? (static files don't work)

    - by Alex
    My configuration:

        Alias /media/ /home/matt/repos/hello/media

        <Directory /home/matt/repos/hello/media>
            Options -Indexes
            Order deny,allow
            Allow from all
        </Directory>

        WSGIScriptAlias / /home/matt/repos/hello/wsgi/django.wsgi

    /media is my media directory. When I go to mydomain.com/media/, it says 403 Forbidden, and the rest of my site doesn't work because all static files are 404s. Why? The page itself loads, just not the media folder. Edit: hello is my project folder. I have tried 777 on all the permissions of that folder.
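
    One thing worth ruling out (a guess, not a confirmed diagnosis): Apache needs the execute bit on every parent directory of the media path, not just on the folder itself, or it returns 403 regardless of the folder's own mode:

        # Show the mode of each component along the path
        namei -m /home/matt/repos/hello/media

        # Grant traversal on each parent directory for "other"
        chmod o+x /home /home/matt /home/matt/repos /home/matt/repos/hello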

    Read the article

  • Samsung printer (ML-1640) margin problem

    - by Rytis
    I just got a new laser printer, a Samsung ML-1640, and it has once again confirmed my experience that Samsung printers suck badly. The printer prints fine, except that the top margin is set seemingly at random, usually very small, cutting off text at the top of the page. I tried updating the firmware and getting new drivers, but nothing helped. I tried changing paper types, and it seemed that with the paper type set to "plain" it was OK, but I just tried printing again after letting the printer rest for the night, and wasted 6 pages until I managed to print 2 boarding passes... Is there anything else I could try to resolve the issue?

    Read the article

  • How does the trashcan utility work on a Tru64 Unix server? Or any other utility?

    - by RBA
    Hi, I attached a trashcan with the mktrashcan command:

        mktrashcan deleteMe1 trashcan/

    Then I deleted all the contents of the deleteMe1 directory (rm -rf *). But what happened is that only the two text files directly inside deleteMe1 (deleteMe2.txt, deleteMe3.txt) were moved into the trashcan folder. The rest of the directories, and the files inside those directories, were not found! Isn't there some way so that whatever is deleted moves to the trashcan directory exactly as it was? Or is there any other utility that can perform the same task in a more advanced way? The test tree was created like this:

        mkdir deleteMe1
        mkdir deleteMe1/deleteMe2
        mkdir deleteMe1/deleteMe3
        touch ./deleteMe1/deleteMe2/deleteMe4.txt
        touch ./deleteMe1/deleteMe2/deleteMe5.txt
        touch ./deleteMe1/deleteMe3/deleteMe6.txt
        touch ./deleteMe1/deleteMe3/deleteMe7.txt
        touch ./deleteMe1/deleteMe2.txt
        touch ./deleteMe1/deleteMe3.txt

    Thanks.

    Read the article

  • How do you mock a Sealed class?

    - by Brett Veenstra
    Mocking sealed classes can be quite a pain. I currently favor an Adapter pattern to handle this, but something about it just feels weird. So, what is the best way to mock sealed classes? Java answers are more than welcome. In fact, I would anticipate that the Java community has been dealing with this longer and has a great deal to offer. But here are some of the .NET opinions: Why Duck Typing Matters for C# Developers Creating wrappers for sealed and other types for mocking Unit tests for WCF (and Moq)
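
    For reference, the Adapter approach mentioned above usually takes this shape in C# (all type names hypothetical): a thin wrapper implements an interface the tests can mock, and only the wrapper touches the sealed type.

        public interface IClock { System.DateTime Now { get; } }

        // The sealed type we cannot mock directly.
        public sealed class SystemClock
        {
            public System.DateTime Now { get { return System.DateTime.Now; } }
        }

        // Thin adapter: production code depends on IClock instead.
        public class SystemClockAdapter : IClock
        {
            private readonly SystemClock _inner = new SystemClock();
            public System.DateTime Now { get { return _inner.Now; } }
        }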

    Read the article

  • OutOfMemoryException - out of ideas

    - by Captain Comic
    Hi, I have a .NET Windows service that is constantly throwing OutOfMemoryException. The service has two builds, for x86 and x64 Windows; however, on x64 it consumes a lot more memory. I have tried profiling it with various memory profilers, but I cannot get a clue what the problem is. The diagnosis so far: the service consumes a lot of VM size. I also tried looking at performance counters (perfmon.exe). What I can see is that the heap size is growing and %GC time is 19%. My application has threads and locking objects, DB connections and a WCF interface. It is the first app in the list; a link to a picture of the performance counters view: http://s006.radikal.ru/i215/1003/0b/ddb3d6c80809.jpg

    Read the article

  • ASP.NET: dealing with issues on a production system

    - by nettguy
    This is the first time I am going to work on an application (a maintenance project) that is already in production. The application was developed in ASP.NET 3.5/C# 3.0 (Web Forms) with jQuery, Ajax, SQL Server 2005, Microsoft Enterprise Library 4.0, and WCF services. Questions (bear with me if my questions are wrong): 1) Is it possible to use Visual Studio (2008 with SP1) to debug the remote application (i.e., the one already in production)? What tools do I need in order to keep track of things in case something goes wrong? 2) Will simply looking into a log file solve the issues? 3) After I am done with enhancements, is it possible to deploy the DLLs directly to the production server? Won't it affect the running application? Please guide me on the procedures I need to follow, and on what areas I need to be aware of when handling a production system. The client is ready to provide any tools for my support. Thanks in advance.

    Read the article
