Search Results

Search found 20592 results on 824 pages for 'anything'.

Page 267/824 | < Previous Page | 263 264 265 266 267 268 269 270 271 272 273 274  | Next Page >

  • Downloading a file over HTTP the SSIS way

    This post shows you how to download files from a web site whilst really making the most of the SSIS objects that are available. There is no task to do this, so we have to use the Script Task and some simple VB.NET or C# (if you have SQL Server 2008) code. Very often I see suggestions about how to use the .NET class System.Net.WebClient and of course this works; you can code pretty much anything you like in .NET. Here I’d just like to raise the profile of an alternative. This approach uses the HTTP Connection Manager, one of the stock connection managers, so you can use configurations and property expressions in the same way you would for all other connections. Settings like the security details that you would want to make configurable already are, but if you take the .NET route you have to write quite a lot of code to manage those values via package variables. Using the connection manager we get all of that flexibility for free. The screenshot below illustrates some of the options we have. Using the HttpClientConnection class makes for much simpler code as well. I have demonstrated two methods: DownloadFile, which just downloads a file to disk, and DownloadData, which downloads the file and retains it in memory. In each case we show a message box to note the completion of the download. You can download a sample package below, but first the code:

        Imports System
        Imports System.IO
        Imports System.Text
        Imports System.Windows.Forms
        Imports Microsoft.SqlServer.Dts.Runtime

        Public Class ScriptMain
            Public Sub Main()
                ' Get the unmanaged connection object, from the connection manager called "HTTP Connection Manager"
                Dim nativeObject As Object = Dts.Connections("HTTP Connection Manager").AcquireConnection(Nothing)

                ' Create a new HTTP client connection
                Dim connection As New HttpClientConnection(nativeObject)

                ' Download the file #1
                ' Save the file from the connection manager to the local path specified
                Dim filename As String = "C:\Temp\Sample.txt"
                connection.DownloadFile(filename, True)

                ' Confirm file is there
                If File.Exists(filename) Then
                    MessageBox.Show(String.Format("File {0} has been downloaded.", filename))
                End If

                ' Download the file #2
                ' Read the text file straight into memory
                Dim buffer As Byte() = connection.DownloadData()
                Dim data As String = Encoding.ASCII.GetString(buffer)

                ' Display the file contents
                MessageBox.Show(data)

                Dts.TaskResult = Dts.Results.Success
            End Sub
        End Class

    Sample Package: HTTPDownload.dtsx (74KB)
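
    The post notes that C# is also an option if you are on SQL Server 2008, so here is a rough C# equivalent of the VB.NET above. Treat it as a sketch rather than the author's code: it assumes the standard 2008 Script Task boilerplate (the template-generated ScriptMain class and ScriptResults enum) and shows only the Main method.

        // C# sketch of the same Script Task logic (SQL Server 2008 Script Task, Main method only).
        using System;
        using System.IO;
        using System.Text;
        using System.Windows.Forms;
        using Microsoft.SqlServer.Dts.Runtime;

        public void Main()
        {
            // Get the unmanaged connection object from the connection manager called "HTTP Connection Manager"
            object nativeObject = Dts.Connections["HTTP Connection Manager"].AcquireConnection(null);

            // Wrap it in an HTTP client connection
            HttpClientConnection connection = new HttpClientConnection(nativeObject);

            // Download #1: save the file to a local path
            string filename = @"C:\Temp\Sample.txt";
            connection.DownloadFile(filename, true);
            if (File.Exists(filename))
            {
                MessageBox.Show(String.Format("File {0} has been downloaded.", filename));
            }

            // Download #2: read the file straight into memory
            byte[] buffer = connection.DownloadData();
            string data = Encoding.ASCII.GetString(buffer);
            MessageBox.Show(data);

            Dts.TaskResult = (int)ScriptResults.Success;
        }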

    Read the article

  • Is Learning C++ Through The Qt Framework Really Learning C++

    - by user866190
    The problem I have is that most of the C++ books I read spend almost forever on syntax and the basics of the language, e.g. for and while loops, arrays, lists, pointers, etc. But they never seem to build anything that is simple enough to use for learning, yet practical enough to get you to understand the philosophy and power of the language. Then I stumbled upon Qt, which is an amazing library! But working through the demos they have, it seems like I am now in the reverse dilemma. I feel like the rich man's son driving around in a sports car subsidized by the father. Like I could build fantastic software, but have no clue what's going on under the hood. As an example of my dilemma, take the task of building a simple web browser. In pure C++ I wouldn't even know where to start, yet with the Qt library it can be done within a few lines of code. I am not complaining about this. I am just wondering how to fill the knowledge void between the basic structure of the language and the high-level interface that the Qt framework provides?

    Read the article

  • REST: How to store and reuse REST call queries

    - by Jason Holland
    I'm learning C# by programming a real monstrosity of an application for personal use. Part of my application uses several SPARQL queries, like so:

        const string ArtistByRdfsLabel = @"
            PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
            PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
            SELECT DISTINCT ?artist
            WHERE {{
                {{ ?artist rdf:type <http://dbpedia.org/ontology/MusicalArtist> .
                   ?artist rdfs:label ?rdfsLabel . }}
                UNION
                {{ ?artist rdf:type <http://dbpedia.org/ontology/Band> .
                   ?artist rdfs:label ?rdfsLabel . }}
                FILTER ( str(?rdfsLabel) = '{0}' )
            }}";

        string Query = String.Format(ArtistByRdfsLabel, Artist);

    I don't like the idea of keeping all these queries in the same class that I'm using them in, so I thought I would just move them into their own dedicated class to remove clutter from my RestClient class. I'm used to working with SQL Server and just wrapping every query in a stored procedure, but since this is not SQL Server I'm scratching my head over what would be best for these SPARQL queries. Are there any better approaches to storing these queries using any special C# language features (or general, non-C#-specific approaches) that I may not already know about?

    EDIT: Really, these SPARQL queries aren't anything special. They are just blobs of text that I later want to grab, insert some parameters into via String.Format and send in a REST call. I suppose you could think of them the same as any SQL query that is kept in the application layer; I just never practised keeping SQL queries in the application layer, so I'm wondering if there are any "standard" practices with this type of thing.
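
    One low-ceremony pattern for this (a sketch only; the class and method names here are invented, not from the post) is to gather the templates into a dedicated static class and expose small methods that do the parameter substitution, so the REST client never touches the raw strings:

        using System;

        // Central home for SPARQL query templates, kept out of the REST client.
        public static class SparqlQueries
        {
            public const string ArtistByRdfsLabel = @"
                PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
                PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
                SELECT DISTINCT ?artist
                WHERE {{
                    {{ ?artist rdf:type <http://dbpedia.org/ontology/MusicalArtist> .
                       ?artist rdfs:label ?rdfsLabel . }}
                    UNION
                    {{ ?artist rdf:type <http://dbpedia.org/ontology/Band> .
                       ?artist rdfs:label ?rdfsLabel . }}
                    FILTER ( str(?rdfsLabel) = '{0}' )
                }}";

            // Keep the substitution next to the template so callers get a ready-to-send query.
            public static string ForArtist(string artist)
            {
                // Naive quote escaping only; proper input handling is out of scope for this sketch.
                return String.Format(ArtistByRdfsLabel, artist.Replace("'", "\\'"));
            }
        }

    Usage is then a single call, e.g. string query = SparqlQueries.ForArtist(Artist);. Larger query sets can instead live in embedded resource (.rq) files read at start-up, which keeps them editable as plain SPARQL, but the idea is the same: one class owns the text, another owns the REST call.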

    Read the article

  • How do Comparison Sites work?

    - by Vijay
    I need your thoughts on how these comparison sites actually work. Sites like Junglee.com and policybazaar.com, and many more like them, provide comparisons of products, fares, etc. gathered from different websites. I have read a little about it and what I found is: these sites use feeds of the other sites' data; they use APIs that those sites actually provide; and for sites where neither of these two possibilities exists, the comparison sites use a web crawler to crawl the data. This is what I have found out; if you think there is more to it, please do give your own views. But what I want to know, partly for learning and partly out of curiosity, is how they actually match the crawled data, feeds and the rest so that there is no duplication. What is the process or algorithm for it? And where should I go to learn these concepts? References to books, articles or anything else are welcome.
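
    Production comparison engines use fairly involved entity-resolution pipelines (normalisation, blocking, fuzzy matching), but as a toy illustration of the matching and de-duplication step the question asks about, here is a minimal C# sketch; every name and record in it is invented for the example:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // One offer as it might arrive from a feed, an API or a crawl.
        class Offer
        {
            public string Source;   // where it was scraped from
            public string Title;    // raw product title
            public decimal Price;
        }

        class Matcher
        {
            // Crude canonical key: lower-case, strip punctuation, sort the words,
            // so "Nikon D90 DSLR Camera" and "DSLR Camera Nikon D90" collide.
            static string CanonicalKey(string title)
            {
                var words = new string(title.ToLowerInvariant()
                        .Select(c => char.IsLetterOrDigit(c) ? c : ' ').ToArray())
                    .Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)
                    .OrderBy(w => w);
                return string.Join(" ", words);
            }

            static void Main()
            {
                var offers = new List<Offer>
                {
                    new Offer { Source = "siteA", Title = "Nikon D90 DSLR Camera", Price = 700m },
                    new Offer { Source = "siteB", Title = "DSLR Camera Nikon D90", Price = 680m },
                };

                // Offers that share a key are treated as the same product and listed together.
                foreach (var group in offers.GroupBy(o => CanonicalKey(o.Title)))
                {
                    Console.WriteLine(group.Key);
                    foreach (var o in group)
                        Console.WriteLine("  {0}: {1}", o.Source, o.Price);
                }
            }
        }

    Real systems replace the key with things like cleaned model numbers, EAN/UPC codes where available, and fuzzy string similarity for everything else; the survey "Duplicate Record Detection: A Survey" (Elmagarmid et al.) is a common starting point for the algorithms.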

    Read the article

  • Can't connect to website after altering IPTables

    - by user2833135
    I attempted to open up a port on my VPS, but I can't connect to my website after opening up that port. Below are the commands I issued to open the port. I didn't get an error or anything after setting this up; I just can't connect to my website after doing this.

        [root@vps ~]# iptables -F
        [root@vps ~]# iptables -A INPUT -i lo -j ACCEPT
        [root@vps ~]# iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        [root@vps ~]# iptables -A INPUT -p tcp --dport 25765 -j ACCEPT
        [root@vps ~]# iptables -P INPUT DROP
        [root@vps ~]# iptables -P FORWARD DROP
        [root@vps ~]# iptables -P OUTPUT ACCEPT
        [root@vps ~]# iptables -L -v

    Just a side note: I am running CentOS 6 (64 bit). Thank you in advance.

    Read the article

  • ESXi and Windows Server CPU parking

    - by Chris J
    For those that don't know, CPU parking is a feature in recent Windows Server releases that allows Windows to pretty much drop a CPU core to zero use, and have nothing use it. It's been introduced as a power-saving measure. There's more detail about it here, amongst other places. However, what I'm curious about is whether this matters on a virtualised guest - or is CPU parking more of a hindrance than a help, given that the physical CPUs are managed by ESXi, not Windows, and that a parked CPU is less likely to deal with traffic unless the scheduler deems there's enough work to unpark it? I've not found anything about this - I do suspect it will very much depend on a given workload, but I've not seen any discussion (unlike, say, whether hyper-threading has any effect, which seems to be discussed regularly). Whilst I do understand "test with your workload", I was wondering if there is any advice or guidance out there that I've missed.

    Read the article

  • Disable static content caching in IIS 7

    - by Lee Richardson
    I'm a developer having what should be a relatively simple problem in IIS 7 on Windows Server 2008 R2. The problem is that IIS 7 is overzealously caching all static content on the server. It's caching all .html and .js content and not noticing when the content changes on disk unless I do an iisreset. I've tried the following:

        - Deleting the local cache in my browser (I'm 99% positive this is a server caching issue)
        - In IIS Admin, under Output Caching, adding an .html extension and unchecking "User-mode caching" and "Kernel-mode caching"
        - In IIS Admin, under Output Caching, adding an .html extension, checking "User-mode caching" and selecting the radio button for "Prevent all caching"
        - In IIS Admin, editing the Output Cache feature settings and unchecking "Enable cache" and "Enable kernel cache"
        - Running "C:\Windows\System32\inetsrv\config\appcmd set config "SharePoint - 80" -section: system.webServer/caching -enabled:false"
        - Looking through applicationHost.config and disabling anything related to caching I could find.

    Nothing seems to work. I'm getting very frustrated. Can anyone please help?
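
    For what it's worth, the same caching section can be flipped from managed code via the IIS 7 management API. This is only a sketch of that idea, assuming the system.webServer/caching section exposes the enabled and enableKernelCache attributes that the appcmd call above targets; the site name is the poster's, everything else is illustrative, and it needs to run elevated with a reference to Microsoft.Web.Administration.dll:

        using System;
        using Microsoft.Web.Administration;

        class DisableStaticCaching
        {
            static void Main()
            {
                using (ServerManager serverManager = new ServerManager())
                {
                    // Write into the site's web.config (use GetApplicationHostConfiguration() for the server level).
                    Configuration config = serverManager.GetWebConfiguration("SharePoint - 80");
                    ConfigurationSection caching = config.GetSection("system.webServer/caching");
                    caching["enabled"] = false;
                    caching["enableKernelCache"] = false;
                    serverManager.CommitChanges();
                }
            }
        }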

    Read the article

  • Quotas - Using quotas on ZFSSA shares, projects and users

    - by Steve Tunstall
    So you don't want your users to fill up your entire storage pool with their MP3 files, right? Good idea to make some quotas. There are some good tips and tricks here, including a helpful workflow (a script) that will allow you to set a default quota on all of the users of a share at once.

    Let's start with some basics. I made a project called "small" and inside it I made a share called "Share1". You can set quotas on the project level, which will affect all of the shares in it, or you can do it on the share level like I am here. Go to the share's General property page.

    First, I'm using a Windows client, so I need to make sure I have my SMB mountpoint. Do you know this trick yet? Go to the Protocol page of the share. See the SMB section? It needs a resource name to make the UNC path for the SMB (Windows) users. You do NOT have to type this name in for every share you make! Do this at the Project level. Before you make any shares, go to the Protocol properties of the Project, and set the SMB resource name to "On". This special setting will automatically make the SMB resource name of every share in the project the same as the share name. Note the UNC path name I got below. Since I did this at the Project level, I didn't have to lift a finger for it to work on every share I make in this project. Simple.

    So I have now mapped my Windows "Z:" drive to this Share1. I logged in as the user "Joe". Note that my computer shows my Z: drive as 34GB, which is the entire size of my pool that this share is in. Right now, Joe could fill this drive up and it would fill up my pool.

    Now, go back to the General properties of Share1. In the "Space Usage" area, over on the right, click on the "Show All" text under the Users & Groups section. Sure enough, Joe and some other users are in here and have some data. Note this is also a handy window to use just to see how much space your users are using in any given share.

    OK, Joe owes us money from lunch last week, so we want to give him a quota of 100MB. Type his name in the Users box. Notice how it now shows you how much data he's currently using. Go ahead and give him a 100M quota and hit the Apply button. If I go back to "Show All", I can see that Joe now has a quota, and no one else does. Sure enough, as soon as I refresh my screen back on Joe's client, he sees that his Z: drive is now only 100MB, and he's more than half way full.

    That was easy enough, but what if you wanted to make the whole share have a quota, so that the share itself, no matter who uses it, can only grow to a certain size? That's even easier. Just use the Quota box on the left-hand side. Here, I use a quota on the share of 300MB.

    So now I log off as Joe, and log in as Steve. Even though Steve does NOT have a quota, it is showing my Z: drive as 300MB. This would affect anyone, INCLUDING the root user, because you specified the quota to be on the SHARE, not on a person. Note that back in the share, if you click the "Show All" text, the window does NOT show Steve, or anyone else, as having a quota of 300MB. Yet we do, because it's on the share itself, not on any user, so this panel does not see that.

    OK, here is where it gets FUN... Let's say you do NOT want a quota on the SHARE, because you want SOME people, like root and yourself, to have FULL access to it and you want the ability to fill the whole thing up if you darn well feel like it. HOWEVER, you want to give the other users a quota. HOWEVER, you have, say, 200 users, and you do NOT feel like typing in each of their names and giving them each a quota, and they are not all members of an AD global group you could use or anything like that. Hmmmmmm...

    No worries, mate. We have a handy-dandy script that can do this for us. Now, this script was written a few years back by Tim Graves, one of our ZFSSA engineers out of the UK. This is not my script. It is NOT supported by Oracle support in any way. It does work fine with the 2011.1.4 code as best as I can tell, but Oracle, and I, are NOT responsible for ANYTHING that you do with this script. Furthermore, I will NOT give you this script, so do not ask me for it. You need to get this from your local Oracle storage SC. I will give it to them. I want this only going to my fellow SCs, who can then work with you to have it and show you how it works.

    Here's what it does... Once you add this workflow to the Maintenance-->Workflows section, you click it once to run it. Nothing seems to happen at this point, but something did. Go back to any share or project. You will see that you now have four new, custom properties on the bottom. Do NOT touch the bottom two properties, EVER. Only touch the top two. Here, I'm going to give my users a default quota of about 40MB each. The beauty of this script is that it will only affect users that do NOT already have any kind of personal quota. It will only change people who have no quota at all. It does not affect the root user.

    After I hit Apply on the share screen, nothing will happen until I go back and run the script again. The first time you run it, it creates the custom properties. The second and all subsequent times you run it, it checks the shares for any users and applies your quota number to each one of them, UNLESS they already have one set. Notice in the readout below how it did NOT apply to my Joe user, since Joe had a quota set.

    Sure enough, when I go back to the "Show All" in the share properties, all of the users who did not have a quota now have one for 39.1MB. Hmmm... I did my math wrong, didn't I? That's OK, I'll just change the number of the custom default quota again. Here, I am adding a zero on the end. After I click Apply, and then run the script again, all of my users, except Joe, now have a quota of 391MB.

    You can customize a person at any time. Here, I took the Steve user, and specifically gave him a quota of zero. Now when I run the script again, he is different from the rest, so he is no longer affected by the script. Under "Show All", I see that Joe is at 100, and Steve has no quota at all. I can do this all day long.

    Yes, you will have to re-run the script every time new users get added. The script only applies the default quota to users that are present at the time the script is run. However, it would be a simple thing to schedule the script to run each night, or to make an alert to run the script when certain events occur.

    For you power users, if you ever want to delete these custom properties and remove the script completely, you will find these properties under the "Schema" section under the Shares section. You can remove them here. There's no need to, however; they don't hurt a thing if you just don't use them.

    I hope these tips have helped you out there. Quotas can be fun.

    Read the article

  • Cisco VPN Connection - No internet no nothing

    - by Kevin
    Hi all, sorry if this has been posted; I tried searching, but I am not exactly sure what I am looking for - I am a developer, not a networking guy. We have a client whose servers we need to connect to using the Cisco VPN client. I have installed the software and dropped in the provided .pcf file, and I can connect. However, when I do, I lose all local and internet capabilities: no hosts resolve, and I still can't connect to their internal FTP and development sites. This leads me to believe either a setting is wrong in my Cisco software, and/or their network is not correctly configured. Does anyone know anything about Cisco VPNs who can give me a hand? My colleague seems to think that they need to enable split tunneling on their end (or a similar setting).

    Read the article

  • FileNameColumnName property, Flat File Source Adapter : SSIS Nugget

    - by jamiet
    I saw a question on MSDN’s SSIS forum the other day that went something like this: "I’m loading data into a table from a flat file but I want to be able to store the name of that file as well. Is there a way of doing that?" I don’t want to come across as disrespecting those who took the time to reply, but there were a few answers along the lines of “loop over the files using a For Each, store the file name in a variable yadda yadda yadda” when in fact there is a much, much simpler way of accomplishing this; it just happens to be a little hidden away, as I shall now explain! The Flat File Source Adapter has a property called FileNameColumnName which for some reason isn’t exposed through the Flat File Source editor; it is, however, exposed via the Advanced Properties. You’ll see in the screenshot above that I have set FileNameColumnName=“Filename” (it doesn’t matter what name you use; any non-empty string will work). What this will do is create a new column in our dataflow called “Filename” that contains, unsurprisingly, the name of the file from which the row was sourced. All very simple. This is particularly useful if you are extracting data from multiple files using the MultiFlatFile Connection Manager, as it allows you to differentiate between data from each of the files, as you can see in the following screenshot. So there you have it, the FileNameColumnName property; a little-known secret of SSIS. I hope it proves to be useful to someone out there. @Jamiet
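
    If you build or tweak packages in code, the same property can also be set through the SSIS design-time API. The snippet below is only a sketch of that idea - the package path, task name and component name are placeholders, and depending on the component you may still need to reinitialise its metadata (or open it once in the designer) for the new output column to appear:

        // References needed: Microsoft.SqlServer.ManagedDTS and Microsoft.SqlServer.DTSPipelineWrap.
        using Microsoft.SqlServer.Dts.Pipeline.Wrapper;
        using Microsoft.SqlServer.Dts.Runtime;

        class SetFileNameColumn
        {
            static void Main()
            {
                Application app = new Application();
                Package package = app.LoadPackage(@"C:\Packages\LoadFlatFiles.dtsx", null);

                // Find the data flow task and, inside it, the Flat File Source component.
                TaskHost dataFlowTask = (TaskHost)package.Executables["Data Flow Task"];
                MainPipe pipeline = (MainPipe)dataFlowTask.InnerObject;

                foreach (IDTSComponentMetaData100 component in pipeline.ComponentMetaDataCollection)
                {
                    if (component.Name == "Flat File Source")
                    {
                        // Set the hidden custom property via the design-time interface.
                        CManagedComponentWrapper designTime = component.Instantiate();
                        designTime.SetComponentProperty("FileNameColumnName", "Filename");
                    }
                }

                app.SaveToXml(@"C:\Packages\LoadFlatFiles.dtsx", package, null);
            }
        }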

    Read the article

  • Is it illegal to rewrite every line of an open source project in a slightly different way, and use it in a closed source project?

    - by Chris Barry
    There is some code which is GPL or LGPL that I am considering using for an iPhone project. If I took that code (JavaScript) and rewrote it in a different language for use on the iPhone, would that be a legal issue? In theory the process that has happened is that I have gone through each line of the project, learnt what it is doing, and then reimplemented the ideas in a new language. To me it seems this is like learning how to implement something, but then reimplementing it separately from the original licence. Therefore you have only copied the algorithm, which arguably you could have learnt from somewhere else other than the original project. Does the licence cover the specific implementation, or the algorithm as well?

    EDIT: Really glad to see this topic create a good conversation. To give a bit more backing to the project, the code involved does some kind of audio analysis. I believe it is non-trivial to learn or implement, although I was prepared to embark on this task (I'm at the level where I can implement an FFT algorithm, and this was going to go beyond that). It is a fairly low-LOC script, so I didn't think it would be too hard to do a straight port. I really like the idea of re-releasing my port as well as using it in the application. I don't see any problem with that, and it would be a great way to give something back to the community. I was going to add a line about not wanting to discuss the moral issues, but I'm quite glad I didn't, as it seems to have fired up the debate a bit. I still feel a bit odd about using open source code to learn from. Does this mean that anything one learns from an open source project is not allowed to be used in a closed source project? And how long after, or how different, does an implementation have to be to not be considered a violation of the licence? Murky!

    EDIT 2: Follow-up question

    Read the article

  • Umbraco directory permissions | umbPermissions Script

    - by Vizioz Limited
    It has bugged me since I first used Umbraco that if I was doing a manual installation I had to set the directory permissions. I just downloaded a backup of one of my clients' Umbraco sites and I was setting up a copy locally and of course I had to set the directory permissions, so I thought there must be a better way! I did a bit of Googling and had a look on the Umbraco forum but I could not find a script to perform this task, then I came across SetACL on SourceForge and I set about writing my own little script. Save the following script as umbpermissions.bat and save it in the same directory as SetACL:

        echo off
        REM Script to setup the Security Permissions for an Umbraco site
        REM This script will give your machine Network Service full rights to the appropriate directories
        REM **** Pre-requisites ****
        REM You will need to download - http://setacl.sourceforge.net/
        REM **** Usage ****
        REM You need to pass in the path for the root of your Umbraco directory
        REM E.g. umbPermissions.bat C:\inetpub\umbracoroot
        @echo umbPermissions.bat - Script to set Umbraco File and Directory Permissions
        @echo Published by Chris Houston - 29th May 2009
        @echo http://blog.vizioz.com
        SetACL.exe -on "%1\web.config" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:full"
        SetACL.exe -on "%1\bin" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:full"
        SetACL.exe -on "%1\config" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:full"
        SetACL.exe -on "%1\css" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:full"
        SetACL.exe -on "%1\data" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:full"
        SetACL.exe -on "%1\masterpages" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:full"
        SetACL.exe -on "%1\scripts" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:full"
        SetACL.exe -on "%1\umbraco" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:full"
        SetACL.exe -on "%1\umbraco_client" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:full"
        SetACL.exe -on "%1\usercontrols" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:full"
        SetACL.exe -on "%1\xslt" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:full"

    Feel free to comment if I missed anything!

    Read the article

  • I removed my-freeze.com NetAssistant, but now can't access two websites

    - by Firefly
    I used "Revo Uninstaller" to uninstall the spyware which left me with a problem using Internet Explorer so then downloaded the free version of "Hijack This" from the website and, not reading the Super User answer correctly, used fix for the general issues it found and saved the log file of the other queries. NetAssistant is completely gone or appears to have - Malwarebytes Malware remover cannot find anything and most Google searches now seem to work correctly. However in removing it I seem to have made an error and now whenever I search for and try to open or try to directly access two sites which I had tried to access via NetAssistant whilst infected IE8 says they cannot be displayed. One of them is Wikipedia and I use both regularly. I am not sure at what point this happened I think it may have been after using Revo Uninstaller and the second section where it looks for references to netassistant (in the registry?). Not sure if this is relevant but I can remember deleting some flags or something relating to Internet Explorer but not sure what. Any suggestions?

    Read the article

  • Incrementing Assembly Version in TFS Builds and its Effect on Other Build Definitions

    - by ssmantha
    A very common scenario when performing TFS builds is to increment the version number of the assemblies. There are quite a few approaches, of which I would like to share two links:

    Ewald Hofman's approach: http://www.ewaldhofman.nl/post/2010/05/13/Customize-Team-Build-2010-e28093-Part-5-Increase-AssemblyVersion.aspx#id_02e7b082-ce95-49a9-92e9-7dc88887b377

    Richard Banks' approach: http://www.richard-banks.org/2010/07/how-to-versioning-builds-with-tfs-2010.html

    Both these approaches work well; however, there are scenarios where editing and checking in the assembly version information can create problems with build definitions meant for continuous integration or gated check-ins. You can suppress the continuous integration builds while checking in the assembly info file by just putting the comment "***NO_CI***", as specified by Ewald in his blog. However, if you have gated check-in in place, this can turn out to be difficult to suppress; I myself tried to suppress the build trigger during the check-in process but things didn't turn out well. That's where Richard's solution comes in handy. Both solutions have their own pros and cons, which I believe can only be experienced over a period of time. In the case of Richard's solution, I believe that we don't have any history of the assembly version info file, and when you take the latest version of the solution the information will be lost. If you notice closely, suppressing the continuous integration builds (the NO_CI approach in check-in comments) is a workaround provided by Microsoft; however, I didn't find anything to suppress gated check-in so far. Suggestions or findings are most welcome.
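
    Whichever trigger strategy you pick, both linked approaches ultimately come down to rewriting the AssemblyInfo files with the build number before compilation. As a rough, hedged illustration of that step (this is not the code from either article; paths, arguments and the version value are placeholders), a small pre-build console step could look like this:

        using System;
        using System.IO;
        using System.Text.RegularExpressions;

        class StampAssemblyVersion
        {
            static void Main(string[] args)
            {
                // arg 0: sources directory fetched by the build; arg 1: version to stamp.
                string sourcesDirectory = args.Length > 0 ? args[0] : @"C:\Builds\1\Sources";
                string version = args.Length > 1 ? args[1] : "1.0.0.0";

                // Matches AssemblyVersion("...") and AssemblyFileVersion("...").
                Regex attribute = new Regex(@"Assembly(File)?Version\(""[^""]*""\)");

                foreach (string file in Directory.GetFiles(sourcesDirectory, "AssemblyInfo.cs", SearchOption.AllDirectories))
                {
                    string stamped = attribute.Replace(File.ReadAllText(file),
                        m => String.Format(@"Assembly{0}Version(""{1}"")", m.Groups[1].Value, version));

                    // Workspace files are usually read-only; clear the flag before writing.
                    File.SetAttributes(file, FileAttributes.Normal);
                    File.WriteAllText(file, stamped);
                }
            }
        }

    Because the stamping happens on the build agent's copy and nothing is checked back in, it sidesteps the NO_CI and gated check-in problem entirely - which is essentially the trade-off the post describes for Richard's approach: the version history lives in the build, not in source control.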

    Read the article

  • Specify IPSEC port range using ipsec-tools

    - by Sandman4
    Is it possible to require IPSEC on a port range? I want to require IPSEC for all incoming connections except a few public ports like 80 and 443, but I don't want to restrict outgoing connections. My SPD rules would look like:

        spdadd 0.0.0.0/0 0.0.0.0/0[80] tcp -P in none;
        spdadd 0.0.0.0/0 0.0.0.0/0[443] tcp -P in none;
        spdadd 0.0.0.0/0 0.0.0.0/0[0....32767] tcp -P in esp/require/transport;

    In the setkey manpage I see IP ranges, but no mention of port ranges. (The idea is to use IPSEC as a sort of VPN to protect internal communications between multiple servers. Instead of configuring permissions based on source IPs, or configuring specific ports, I want to demand IPSEC on anything which is not meant to be public - I feel it's less error-prone this way.)

    Read the article

  • Remove Dell Openmanage from Windows 2008

    - by Erwin Blonk
    I have installed Dell OpenManage Server Agent 4.2.2 on Windows Server 2008. I need the newer version, so I need to uninstall this version first. However, apart from some registry references pointing to sources that aren't there, there is little or no trace of it being installed - for example, no trace of the program files or an entry in Programs and Features. Still, installing a newer version keeps coming up with an older version that needs to be removed first. When I try to install version 4.2.2 to repair and eventually remove it, it gives an error:

        Dell Openmanage Server Agent - Error
        An error was encountered while testing machine type. Failure openingen required handle to .DLL. Dell Openmanage Server Agent cannot continue the installation. Setup will exit now.

    I haven't found anything using different parts of the error messages as search terms.

    Read the article

  • Need recommendation for transferring ASP.NET MVC skills to PHP

    - by Tuck
    I am looking to translate my skills in .NET to PHP - specifically in regards to ASP.NET MVC. At work I am currently using .NET MVC 2.0 on a variety of projects and thoroughly enjoy the platform. Specifically, I enjoy the very minimal configuration required to get a project up and running (just create the project, define routes, and start coding), as well as the ability for controller actions to return different items (i.e. ActionResult, JsonResult). Another piece I really like is the way the view/model interaction can be handled. For example, I like being able to call return View(model) and having a view page (.aspx) load with the full model object available to the view, regardless of the model type. I'm looking for a PHP implementation of MVC that is the most similar to what I am already familiar with. I don't need anything apart from the MVC functionality. I've looked at Zend, Symfony, CodeIgniter, etc. and, while they look like they'll be fun to play with in the future, they provide much more functionality than I need. I'd prefer to write my own DAL, form helpers, delegate handlers, authentication/ACL pieces, etc. In short, I just need something to handle the routing and view interactions, and I will worry about the model implementation myself. Can someone please point me to some lightweight code that accomplishes or comes close to accomplishing my objectives above? Or can someone identify just the portions of a larger framework that do the same (again, I'm not currently interested in implementing something on a big framework, just the MVC portion, and I want to implement the model portion myself as much as possible)? Thanks in advance...

    Read the article

  • Access virtualhosts over LAN (Also in xpmode (Virtual PC))

    - by Pheter
    Hi, I am running Wamp on my computer (the host). I have set up several virtualhosts in apache and they are working fine when I access them from the same computer (host). I have installed Windows XPMode on my computer (which is running windows 7). XPMode (which uses Virtual PC) is set up to use a NAT network. The network in XPMode is working fine, and I can access the host PC via the IP address 192.168.1.5, just as I would if I was using any physical computer on the same network. I can view all the web pages at 192.168.1.5 and it's subdirectories. However, I cannot access any of the subdomains that are configured in the virtualhosts of the host computer. How can I access the subdomains? I don't think that the fact that I am using XPMode and am using a virtualized OS has anything to do with it, but I thought that it was worth mentioning.

    Read the article

  • Windows Server 2008 R2 LDAPS

    - by Chad Moran
    I have a Server 2008 R2 server with ADDS installed. I'm trying to configure HP's iLO utility to connect to it over SSL. I installed the Active Directory Certificate Services role, but after doing so I'm still not able to connect to LDAP over SSL. I checked the event log and it's showing warnings with Event ID 36886, saying that there aren't default credentials yet. I'm not too sure why this is happening. I haven't done anything with ADCS other than installing the service - do I need to create a certificate for the server?

    Read the article

  • Basic proxy with OpenSSH, Cygwin, Putty

    - by clang1234
    I know this is probably a common question, but after looking around for a few hours (on this site and others), I can't find a solution. I'm trying to set up a simple proxy. I already have a server running Windows Server 2008. I've installed Cygwin and have OpenSSH installed. I also have sshd (the openssh daemon) running. Port 22 is forwarded correctly. On my client side I have Putty on a Windows 7 machine. I can successfully open a connection to my server and log in to access the shell. So what do I do next? Do I just name the ports I want tunneled in Putty or do I need to tell my SSH server what to do with those ports? Thanks for the help. Let me know if I left anything out.

    Read the article

  • How do you measure the value of your software?

    - by Mike
    Hi, one of the principles of agile is that you should measure working software: "Working software is the primary measure of progress" - 12 principles of Agile. The thing is, while I can measure my software in terms of stories done, bugs squashed or the volume of defect reports decreasing, I'm stuck on how to measure the value of my software. If I use Mike Cohn as an example and his helping SalesForce.com deliver 500% more value to its customers compared to the previous year*, how do I measure that increase? How do I measure where I am right now? Other metrics he uses are the number of features and the number of features per developer. This is something I could work out if my backlog was in good order and the stories were cut up by 'feature', but we're just starting out with agile, so I need some way of working out what value we deliver now, and then use a similar metric in, say, six months to see if we've increased our output. I've heard about measuring the value of software by an uptick in revenue, or an increase in customer satisfaction (how would you measure that, though?), but those increases could be attributed to anything in the company (sales, accounting, support) and not directly to the work my department is doing. So, how do you measure the value of your software, and how did you start? Thanks, Mike

    *Succeeding With Agile - Mike Cohn

    Read the article

  • How can I open urls on my host machine with VMWARE?

    - by Yanamon
    I am running a Windows 7 VM with VMware Player from Fedora. I have VMware Tools installed successfully and I have successfully used some of its features, like Unity mode, so it seems to be installed correctly. That being said, I still cannot get URLs to open up in my host machine's browsers. These are the steps I have taken:

        1. Within the VM I set "Default Host Application" to be the application to open URLs.
        2. Within my host machine I have set Chrome to be my preferred application for opening URLs.
        3. Enabled Shared Folders in the VM (not sure if that really helped anything, but I saw it suggested in a forum post).

    After doing that, when I click on a link I get the following error message:

        Default host application: Make sure the virtual machine's configuration allows the guest to open host applications.

    I cannot find any option like that in my VM's configuration, so I am not sure what the error message is referring to.

    Read the article

  • Dynamic DNS at freedns.afraid.org using a Fritz!Box

    - by kai
    I am having some trouble setting up dynamic DNS with my Fritz!Box 7360. I have set up the Dynamic DNS page with the following (this is translated from German, so it might be worded a bit differently):

        [x] Use dynamic DNS
        Dynamic DNS Provider: User defined
        Update-URL: https://freedns.afraid.org/dynamic/update.php?MY-DIRECT-URL-KEY
        Domain Name: mydomain.crabdance.com
        User Name: myusername
        Password: mypassword

    Now on the FritzBox status page, it says:

        Dynamic DNS: activated, mydomain.crabdance.com, Status: Account temporarily deactivated

    When I check back on http://freedns.afraid.org, my IP address never changes. Is there any way to fix this? Note my router is on an IPv6 network (m-net), with IPv4 only through DS-Lite; I'm not sure whether this affects anything.

    Update: Following the guide here (putting myusername instead of MY-DIRECT-URL-KEY) hasn't given any success. However, the status field has changed slightly:

        Dynamic DNS: activated, mydomain.crabdance.com, Status: unknown

    Read the article

  • Windows 8.1 Enterprise Sysprep Error

    - by Anurag Shetti
    I am trying to sysprep my Windows 8.1 Enterprise (MSDN) installation and I get the following errors. I upgraded Windows 8 to Windows 8.1, and the machine contains all the configuration for VS 2012 and the rest.

    Exact error:

        Sysprep was not able to validate your Windows installation

    Error message lines in the log:

        C:\Users\André>err 0x8007139f
        # as an HRESULT: Severity: FAILURE (1), FACILITY_WIN32 (0x7), Code 0x139f
        # for hex 0x139f / decimal 5023
        ERROR_INVALID_STATE winerror.h
        # The group or resource is not in the correct state to
        # perform the requested operation.
        # 1 matches found for "0x8007139f"

        SYSPRP ActionPlatform::GetValue: Error from RegQueryValueEx on value SysprepMode under key HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Setup\Sysprep; dwRet = 0x2

    I have searched HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Setup\Sysprep but I couldn't find anything for sysprep mode. The value for sysprep was (value not set).

    Read the article

  • Minimizing SQL transaction log file size on developer box running simple recovery model

    - by Anders Rask
    We have a lot of SQL Servers in our development environment where we never take backups of the databases (TFS for the code is enough). The (SharePoint) databases are all set to the simple recovery model, but the log files, especially for the SharePoint configuration database, are growing quite large and filling up our data drive on the SQL Server. Since these log files are never used for anything, I would like advice on how best to minimise the size of these log files - or even disable them, if possible. I'm not completely sure why the log files grow so large even under the simple recovery model (I checked for long-running transactions with DBCC OPENTRAN but found none). I guess the reason the log files are not being truncated is that we don't take any backups, and hence checkpoints aren't reached. The autogrowth for the log files is set to 10%, restricted to 2 GB, so I guess that is why the checkpoint (70%) isn't reached here either. What would be the best strategy to keep the log files small (best case 0) without sacrificing performance (e.g. VLF fragmentation)?
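
    For a dev box where the log really is disposable, the usual one-off fix is to issue a CHECKPOINT and then DBCC SHRINKFILE against the log's logical file name. Purely as an illustration (server, database and logical file names below are placeholders, not taken from the question), that can be scripted from C# like so:

        using System;
        using System.Data.SqlClient;

        class ShrinkLog
        {
            static void Main()
            {
                // Placeholders - find the real logical name with:
                //   SELECT name FROM sys.database_files WHERE type_desc = 'LOG'
                const string connectionString = "Server=localhost;Database=SharePoint_Config;Integrated Security=true";
                const string logicalLogName = "SharePoint_Config_log";

                using (SqlConnection connection = new SqlConnection(connectionString))
                {
                    connection.Open();

                    // Under the simple recovery model a checkpoint marks the inactive log as reusable...
                    using (SqlCommand checkpoint = new SqlCommand("CHECKPOINT;", connection))
                        checkpoint.ExecuteNonQuery();

                    // ...and SHRINKFILE then hands the space back to the OS (target size 128 MB here).
                    using (SqlCommand shrink = new SqlCommand(
                        "DBCC SHRINKFILE (N'" + logicalLogName + "', 128);", connection))
                        shrink.ExecuteNonQuery();
                }
            }
        }

    Shrinking on a schedule tends to just re-trigger autogrowth and fragment the VLFs, so the longer-term answer is usually to size the log once to a sensible value and leave it alone, rather than chasing zero.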

    Read the article

< Previous Page | 263 264 265 266 267 268 269 270 271 272 273 274  | Next Page >