Search Results

Search found 15081 results on 604 pages for 'publish folder'.

Page 233 of 604

  • How to compile an x64 ASP.NET website?

    - by Eran Betzalel
    I'm trying to compile (using Visual Studio) an ASP.NET website with the Chilkat library. The compilation fails due to this error: Could not load file or assembly 'ChilkatDotNet2, Version=9.0.8.0, Culture=neutral, PublicKeyToken=eb5fc1fc52ef09bd' or one of its dependencies. An attempt was made to load a program with an incorrect format. I've been told that this error occurs because of a platform mismatch. The weird thing is that although the compilation fails, the site works once accessed from a browser. My theory is that the IIS compilation uses the csc.exe compiler from the Framework64 (64-bit) folder, while Visual Studio uses the csc.exe compiler from the Framework (32-bit) folder. If this is actually the case, how can I configure Visual Studio to run with the 64-bit compiler for ASP.NET sites? This is my current development configuration: Windows 7 (x64), Visual Studio 2008 Pro (x86, of course...), Chilkat library (x64), IIS/ASP.NET (x64).
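    One way to confirm the theory, as a hedged suggestion: Visual Studio 2008 is a 32-bit process and builds websites with the 32-bit toolchain, so precompiling the site once with the 64-bit aspnet_compiler shows whether the 64-bit build itself is clean. The paths below are illustrative, not taken from the question:

        "%windir%\Microsoft.NET\Framework64\v2.0.50727\aspnet_compiler.exe" -v /MySite -p "C:\Projects\MySite" "C:\Temp\MySite_precompiled"

    If that succeeds, a common workaround (again, just a suggestion) is to keep a 32-bit copy of the Chilkat assembly around for design/build time inside the IDE and deploy the x64 one, rather than trying to force the IDE build to use the 64-bit compiler.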

    Read the article

  • How to add SQLite (SQLite.NET) to my C# project

    - by Lirik
    I followed the instructions in the documentation: Scenario 1: Version Independent (does not use the Global Assembly Cache) This method allows you to drop any new version of the System.Data.SQLite.DLL into your application's folder and use it without any code modifications or recompiling. Add the following code to your app.config file: <configuration> <system.data> <DbProviderFactories> <remove invariant="System.Data.SQLite"/> <add name="SQLite Data Provider" invariant="System.Data.SQLite" description=".Net Framework Data Provider for SQLite" type="System.Data.SQLite.SQLiteFactory, System.Data.SQLite" /> </DbProviderFactories> </system.data> </configuration> My app.config file now looks like this: <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <sectionGroup name="userSettings" type="System.Configuration.UserSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" > <section name="DataFeed.DataFeedSettings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" allowExeDefinition="MachineToLocalUser" requirePermission="false" /> </sectionGroup> </configSections> <userSettings> <DataFeed.DataFeedSettings> <setting name="eodData" serializeAs="String"> <value>False</value> </setting> </DataFeed.DataFeedSettings> </userSettings> <system.data> <DbProviderFactories> <remove invariant="System.Data.SQLite"/> <add name="SQLite Data Provider" invariant="System.Data.SQLite" description=".Net Framework Data Provider for SQLite" type="System.Data.SQLite.SQLiteFactory, System.Data.SQLite" /> </DbProviderFactories> </system.data> </configuration> My project is called "DataFeed": using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Data.SQLite; //<-- Causes compiler error namespace DataFeed { class Program { static void Main(string[] args) { } } } The error I get is: .\dev\DataFeed\Program.cs(5,19): error CS0234: The type or namespace name 'SQLite' does not exist in the namespace 'System.Data' (are you missing an assembly reference?) I'm not using the GAC, I simply dropped the System.Data.SQLite.dll into my .\dev\DataFeed\ folder. I thought that all I needed to do is add the DLL to the project folder as it was mentioned in the documentation. Any hints on how to actually make this work?
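    The "Scenario 1" instructions only cover runtime loading through DbProviderFactories; to compile a file that says using System.Data.SQLite; the project still needs an assembly reference to the DLL (Add Reference > Browse in Visual Studio). A hedged sketch of the resulting .csproj fragment, with the path assumed from the question:

        <ItemGroup>
          <Reference Include="System.Data.SQLite">
            <HintPath>.\System.Data.SQLite.dll</HintPath>
            <Private>True</Private>
          </Reference>
        </ItemGroup>

    With the reference in place, dropping a newer System.Data.SQLite.dll into the application folder still works at runtime as the documentation describes.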

    Read the article

  • Tools to Help Post Content On Your WordPress Blog

    - by Matthew Guay
    Now that you’ve got a nice blog, you want to do more with it and start posting content.  Here we look at some tools that will allow you to post directly to your WordPress blog. Writing a new blog post is easy with WordPress as we saw in our previous post about Starting your own WordPress blog.  The web editor gives you a lot of features and even lets you edit your post’s source code if you enjoy hacking HTML.  There are other tools that will allow you to post content, here we look at how you can post with dedicated apps, browser plugins, and even by email. Windows Live Writer Windows Live Writer (part of the Windows Live Essentials Suite) is a great app for posting content to your blog.  This free program for Microsoft lets you post content to a variety of blogging services, including Blogger, Typepad, LiveJournal, and of course WordPress.  You can write blog posts directly from its Word-like editor, complete with pictures and advanced formatting.  Even if you’re offline, you can still write posts and save them for when you’re online again. For more information about installing Live writer, check out our article on how to Install Windows Live Essentials In Windows 7. Once Live Writer is installed, open it to add your blog.  If you already had Live Writer installed and configured for a blog, you can add your new blog, too.  Just click your blog’s name in the top right corner, and select “Add blog account”. Select “Other blog service” to add your WordPress blog to Writer, and click Next.   Enter your blog’s web address, and your username and password.  Check Remember my password so you don’t have to enter it every time you write something. Writer will analyze your blog and setup your account. During the setup process it may ask to post a temporary post.  This will let you preview blog posts using your blog’s real theme, which is helpful, so click Yes. Finally, add your Blog’s name, and click Finish. You can now use the rich editor to write and add content to a new blog post.   Select the Preview tab to see how your post will look on your blog… Or, if you’re a HTML geek, select the Source tab to edit the code of your blog post. From the bottom of the window, you can choose categories, insert tags, and even schedule the post to publish on a different day.  Live Writer is fully integrated with WordPress; you’re not missing anything by using the desktop editor. If you want to edit a post you’ve already published, click the Open button and select the post.  You can chose and edit any post, including ones you published via the web interface or other editors. Add Multimedia Content to your Posts with Live Writer Back in the Edit tab, you can add pictures, videos and more from the sidebar.  Select what you want to insert. Pictures If you insert a picture, you can add many nice borders and designs to it. Or, you can even add artistic effects from the Effects tab in the sidebar. Photo Gallery If you want to post several pictures, say some of your vacation shots, then inserting a picture gallery may be the best option.  Select Insert Photo Gallery in the sidebar, and then choose the pictures you want in the gallery. Once the gallery is inserted, you can choose from several styles to showcase your pictures. When you post the blog, you will be asked to sign in with your Windows Live ID as the gallery pictures will be stored in the free Skydrive storage service. 
Your blog readers can see the preview of your pictures directly on your blog, and then can view each individual picture, download them, or see a slideshow online via the link. Video If you want to add a video to your blog post, select Video from the sidebar as above.  You can select a video that’s already online, or you can choose a new video from file and upload it via YouTube directly from Windows Live Writer.   Note that you will have to sign in with your YouTube account to upload videos to YouTube, so if you’re not logged in you’ll be prompted to do so when you click Insert. Geek Tip:  If you ever want to copy your Live Writer settings to another computer, check out our article on how to Backup Your Windows Live Writer Settings. Microsoft Office Word Word 2007 and 2010 also let you post content directly to your blog.  This is especially nice if you’ve already typed up a document and think it would be good on your blog as well.  Check out our in-depth tutorial on posting blog posts via Word 2007 using Word 2007 as a blogging tool. This works in Word 2010 too, except the Office Orb has been replaced by the new Backstage view.  So, in Word 2010, to start a new blog post, click File \ New then select Blog post.  Proceed as you would in Word 2007 to add your blog settings and post the content you want. Or, if you’ve already written a document and want to post it, select File \ Share (or Save and Send in the final version of Word 2010), and then click Publish as Blog Post.  If you haven’t set up your blog account yet, set it up as shown in the Word 2007 article. Post Via Email Most of us use email daily, and already have our favorite email app or service.  Whether on your desktop or mobile phone, it’s easy to create rich emails and add content.  WordPress lets you generate a unique email address that you can use to easily post content and email to your blog.  Just compose your email with the subject as the title of your post, and send it to this unique address.  Your new post will be up in minutes. To activate this feature, click the My Account button in the top menu bar in your WordPress.com account, and select My Blogs. Click the Enable button under Post by Email beside your blog’s name.   Now you’ll have a private email you can use to post to your blog.  Anything you send to this email will be posted as a new post.  If you think your email may be compromised, click Regenerate to get a new publishing email address. Any email program or webapp is now a blog post editor.  Feel free to use rich formatting or insert pictures; it all comes through great.  This is also a great way to post to your blog from your mobile device.  Whether you’re using webmail or a dedicated email client on your phone, you can now blog from anywhere.   Mobile Applications WordPress also offers dedicated applications for blogging directly from your mobile device.  You can write new posts, edit existing ones, and manage comments all from your smartphone.  Currently they offer apps for iPhone, Android, and Blackberry.  Check them out at the link below. Conclusion Whether you want to write from your browser or email a post to your blog, WordPress is flexible enough to work right along with your preferences.  However you post, you can be sure that it will look professional and be easily accessible with your WordPress blog. Download Windows Live Writer. Download WordPress apps for your mobile device.

    Read the article

  • Failure to launch application (CreateProcess error=87), can't use the shortened-classpath workaround

    - by Ivo Bosticky
    When I launch our application in Eclipse on Windows I receive the following error: Exception occurred executing command line. Cannot run program .. : CreateProcess error=87, The parameter is incorrect. I've solved this in the past by shortening the CLASSPATH. I've now come to a point where I can no longer shorten the CLASSPATH, and would like to know if there are any other workarounds. http://support.microsoft.com/kb/830473 seems to indicate that the maximum command-line length in Windows XP is 8191 characters, and that the only solutions are shortening folder names, reducing the depth of folder trees, using parameter files, etc.
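    One more workaround that sometimes helps when it is the classpath itself that overflows the command line (a hedged sketch; the jar names are made up): put the real classpath into the manifest of a small "pathing" jar and reference only that jar.

        Manifest-Version: 1.0
        Class-Path: lib/foo.jar lib/bar.jar lib/baz.jar

        jar cfm pathing.jar manifest.txt

    The Class-Path entries are resolved relative to the location of pathing.jar, so it should sit next to (or above) the lib folder. The launch classpath, e.g. in the Eclipse run configuration, then only needs to contain pathing.jar.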

    Read the article

  • creating Linq to sqlite dbml from DbLinq source code

    - by Veer
    Hi all, I tried to create a LINQ to SQLite DBML file using DbLinq, but without success. Each time I get a different type of error; maybe I'm doing something wrong. Can anyone tell me the step-by-step procedure to create the DBML file from the DbLinq source code? Edit: Steps I followed: I downloaded the source from this link. Edited the "run_sqliteMetal.bat" file in the \\DbLinq-0.19\src\DbMetal folder as 'DbMetal.exe -database:myDb.db3 -namespace:myNS -code:myCode.cs' -dbml:myDbml.dbml. Tried to run the DbMetal project to produce the executable, but there was a runtime error because the Options object in Parameters.cs was null. Hence I downloaded the ready-made exe from this location, copied it into the \DbLinq-0.19\src\DbMetal folder and executed the "run_sqliteMetal.bat" file. I got a blank screen and the Windows error message "DbLinq has stopped working". Any help? Thanks in advance, Veer
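    If it helps, the command quoted in step 2 has mismatched quotes around it; assuming the flag names are exactly as written there, the intent is presumably a single unquoted line in run_sqliteMetal.bat along these lines (untested sketch, file names taken from the question):

        DbMetal.exe -database:myDb.db3 -namespace:myNS -code:myCode.cs -dbml:myDbml.dbml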

    Read the article

  • Remote Desktop to Your Azure Virtual Machine

    - by Shaun
    The Windows Azure team published their new development portal this week, along with SDK 1.3. This new release includes a lot of cool features; the one I'm looking forward to is remote desktop access to your running Windows Azure virtual machine.   Configuring Remote Desktop Access It is very simple to enable remote desktop access for an Azure service. First of all, let's create a new Windows Azure project in Visual Studio. In this example I just created a normal MVC 2 web role without any modifications. Then we right-click the Azure project node in the Solution Explorer window, select "Publish", and choose "Deploy your Windows Azure project to Windows Azure" with the top radio button. Then select the credentials, deployment service/slot, storage and label as usual. You must have the Management API certificate uploaded to your Windows Azure account, and the certificate installed on your machine beforehand, in order to use this one-click deployment feature. If you are familiar with this dialog you will notice a link named "Configure Remote Desktop connections"; this is where you enable the remote desktop feature for the service. After clicking this link we set the remote desktop access authorization information. There are 4 things to configure. Certificate: we need to either create or select a certificate file in order to encrypt the access credentials; in this example I will use the certificate file for my Management API. Username: the remote desktop user name used to access the virtual machine. Password: the password for that account. Expiration: the access credentials expire after 1 month by default, but we can change that here. After that we click the OK button to go back to the publish dialog.   The next step is to go back to the new Windows Azure portal and navigate to the hosted services list. I created a new hosted service and uploaded the certificate file to it. The user name and password used to access the Azure machine are encrypted on the local machine, sent to the Windows Azure platform, and then decrypted on the Azure side using the same certificate; this is why we need to upload the certificate file to Azure. We navigate to "Hosted Services, Storage Accounts & CDN" in the left panel, create a new hosted service named "SDK13" and select its "Certificates" node. Then we click the "Add Certificates" button, select the local certificate file and provide its password to install it into this Azure service.   The final step is to go back to Visual Studio and, in the publish dialog, just click the OK button. Visual Studio will upload our package and configuration to the service with the remote desktop settings.   Remote Desktop Access to the Azure Virtual Machine With all of that done, let's look back at the Windows Azure development portal. If I select the web role I just published, we can see a section named "Remote Access" on the toolbar. In this section the Enable checkbox is checked, which means this role has the remote desktop access feature enabled. If we want to modify the access credentials we can simply click the Configure button and update the user name, password, certificate and expiration date.   Let's select the instance node under the web role. In this case I created just one instance for the demo.
We can see that when we select the instance node, the Connect button becomes enabled. After clicking this button an RDP file is downloaded. This is a Remote Desktop configuration file that we can use to access our Azure virtual machine. Let's download it to our local machine and run it. We enter the user name and password we specified when we published our application to Azure and then click OK. A certificate warning dialog might appear; this is because the certificate we used for encryption is not signed by a trusted provider. Just select OK in these cases, as we know the certificate is safe. Finally, the Windows Azure virtual machine appears.   A Quick Look into the Azure Virtual Machine Let's have a very quick look at our virtual machine. There are 3 disks available to us: C, D and E. Disk C: stores the local resources, diagnostics information, etc. Disk D: the system disk, which contains the OS, IIS, the .NET Frameworks, etc. Disk E: stores our application code. We can also see the IIS site hosting our website on Azure, and the IP configuration of the Azure virtual machine.   Summary In this post I covered one of the new features of the Azure SDK 1.3 – Remote Desktop Access. We can configure the access per service, and all of the instances of that service can be accessed through the remote desktop tool. With this feature we can dig into the virtual machines of our instances to see internal information such as the system event log, IIS log, system information, etc. But we should be careful about modifying system settings, for 2 reasons that I know of for now: 1. If we have more than one instance in our service we should ensure that any system settings we modify are applied to all instances/virtual machines; otherwise, as the machines sit behind the Azure load balancer, our application may not work correctly due to the different settings between the instances. 2. When a virtual machine encounters a problem and needs to be moved to another physical machine, all the settings we made will disappear.   Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
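    For reference, enabling the feature ultimately shows up as plugin settings in ServiceConfiguration.cscfg; the fragment below is a hedged reconstruction from memory of what SDK 1.3 generates (setting names, role name and thumbprint are illustrative and may differ):

        <Role name="WebRole1">
          <ConfigurationSettings>
            <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
            <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="shaun" />
            <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" value="...base64..." />
            <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2011-01-01T00:00:00.0000000+08:00" />
            <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" />
          </ConfigurationSettings>
          <Certificates>
            <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption"
                         thumbprint="0000000000000000000000000000000000000000" thumbprintAlgorithm="sha1" />
          </Certificates>
        </Role>

    The RemoteForwarder setting is only enabled on one role per deployment; the certificate named here is the one uploaded to the hosted service in the portal step above.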

    Read the article

  • BizTalk server problem

    - by WtFudgE
    Hi, we have a BizTalk server (a virtual one (1!)...) at our company, and a SQL server where the data is kept. Now we have a lot of data traffic - I'm talking about hundreds of thousands of messages - so I'm actually not even sure one server is safe, but our company is not that easy to convince. Recently we have had a lot of problems. Allow me to describe the situation in detail, so I'm not missing anything: Our server has 5 applications: One with 3 orchestrations, 12 send ports, 16 receive locations. One with 4 orchestrations, 32 send ports, 20 receive locations. One with 4 orchestrations, 24 send ports, 20 receive locations. One with 47 (yes, 47) orchestrations, 37 send ports, 6 receive locations. One common application with a couple of resources. Our problems have occurred since we deployed the application with the 47 orchestrations. A lot of these orchestrations use assign shapes which use C# code to do the mapping. This is because we use HL7 extensions, which are kind of special, so using C# code and XPath made the mapping a lot easier, because a lot of these schemas look alike. The C# code reads in XmlNodes received through XPath and returns XmlNodes, which are then assigned back to BizTalk messages. I'm not sure if this could be the cause, but I thought I'd mention it. The send and receive ports have a lot of different types: File, MQSeries, SQL, MLLP, FTP. Each of these types has its own host instance, to balance out the load. Our orchestrations use the BizTalkApplication host. On this server a couple of scripts are also running, mostly FTP upload scripts and a zipper script, which zips files every half hour into a daily zip and deletes the zip files after a month. We use this zip script on our backup files (we back up a lot, and the backups are also on our server); we did this because the server had problems sending files to a location where there were a lot (A LOT) of files, and after the files were reduced to zips it went better. The problems we are having recently are mainly two major ones: Our most important problem is the following. We kept a receive location with a lot of messages on a queue for testing. After we start this receive location, which uses the 47 orchestrations, the running service instances start to skyrocket. OK, this is pretty normal; let's say about 10000. Then we stop the receive location to see how BizTalk handles these 10000 instances. Normally they would go down pretty fast, and they do sometimes, but after a while the server starts to "throttle", meaning the instances just stop being processed and the count stays at the same number - for example, in 30 seconds it goes down from 10000 to 4000, then it stays at 4000 and lowers very, very slowly, like 30 in 5 minutes or so. This means that all the other service instances of the other applications are also stuck in there, and they are also not processed. We noticed that after restarting our host instances the instance count went down fast again, so we tried selectively restarting different host instances to locate the problem. We noticed that eventually restarting the file send/receive host instance would do the trick, so we thought file sends were the problem, considering that we make a lot of backups. So we replaced the file-type backups with MQSeries backups. The same problem occurred and, funnily enough, restarting the file send/receive host still fixes the problem. No errors can be found in the event viewer either. A second problem we're having is
that sometimes, around 6 am, all or some of the host instances are stopped. In the event viewer we noticed the following errors (there is more than one): The receive location "MdnBericht SQL" with URL "SQL://ZNACDBPEG/mdnd0001/" is shutting down. Details: "The error threshold has been exceeded. The receive location is shutting down.". The Messaging Engine failed to add a receive location "M2m Othello Export Start Bestand" with URL "\m2mservices\Othello_import$\DataFilter Start*.xml" to the adapter "FILE". Reason: "The FILE adapter cannot access the folder \m2mservices\Othello_import$\DataFilter Start. Verify this folder exists. Error: Logon failure: unknown user name or bad password.". The FILE adapter cannot access the folder \m2mservices\Othello_import$\DataFilter Start. Verify this folder exists. Error: Logon failure: unknown user name or bad password. An attempt to connect to "BizTalkMsgBoxDb" SQL Server database on server "ZNACDBBTS" failed. Error: "Login failed for user ''. The user is not associated with a trusted SQL Server connection." It would seem that there's a login failure at this time, and that because of it other services also experience problems and are eventually shut down. The thing is, our user is an admin, and it's impossible that its password is wrong "sometimes". We have considered that the problem could be due to an infrastructure issue, but that's not really our department. I know it's a long post, but we're not sure any more what to do. Would adding another server and balancing the load solve our problems? Is there a way to measure our load and know where to start splitting? What are normal load numbers, etc.? I appreciate any answers, because these issues are getting worse and we're also on a deadline. Thanks a lot for any replies!

    Read the article

  • installing xuggler java libraries in Tomcat on Linux

    - by Dilip Shah
    I'm trying to install the Xuggler Java libraries in Tomcat (version 5.5) on fedora-release-7-3. Should I install the binaries available for download on the Xuggler website, or build my own (http://www.xuggle.com/xuggler/downloads/build.jsp)? I took the easy route first and installed the ready-made binaries downloaded from http://www.xuggle.com/xuggler/downloads/ into the usr/local/xuggler folder on my Linux server, and then copied the jar files from the share/java/jars folder to Tomcat's $CATALINA_HOME/shared/lib directory (as recommended by http://wiki.xuggle.com/Frequently_Asked_Questions#I_get_an_.22UnsatisfiedLinkError.22_when_I_run_Xuggler-based_Applications_in_Tomcat). These are some 6 .jar files, including xuggle-xuggler.jar. After restarting Tomcat, I'm still getting a "java.lang.UnsatisfiedLinkError: no xuggle-xuggler in java.library.path" exception when my Java code attempts to invoke a Xuggler method, such as the one that finds the video duration of an FLV file. What am I doing wrong? Any help is very much appreciated!
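    For what it's worth, that UnsatisfiedLinkError usually means the JVM inside Tomcat cannot see Xuggler's native libraries, not the jars; the native library path has to be supplied to the Tomcat process. A hedged sketch, with paths assumed from the install location above, placed in $CATALINA_HOME/bin/setenv.sh (or exported in the shell before running startup.sh if this Tomcat does not pick up setenv.sh):

        # make Xuggler's native .so files visible to the Tomcat JVM (paths are assumptions)
        export XUGGLE_HOME=/usr/local/xuggler
        export LD_LIBRARY_PATH=$XUGGLE_HOME/lib:$LD_LIBRARY_PATH
        export CATALINA_OPTS="$CATALINA_OPTS -Djava.library.path=$XUGGLE_HOME/lib"

    After adding this, Tomcat needs a full stop/start (not just a webapp reload) so the new environment is picked up.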

    Read the article

  • Windows Azure Evolution &ndash; Welcome to VS2012

    - by Shaun
    When Microsoft released the first preview versions of Windows 8 and Visual Studio, many people in the community asked whether the Windows Azure tools were available for them. The answer was "NO". Microsoft promised that the Windows Azure tools would support only Visual Studio 2010 for the time being, but that they would work once Visual Studio 2012 was finally released. Now, along with the newly published Windows Azure platform, we have the latest Windows Azure SDK 1.7, which is compatible with the Visual Studio 2012 RC.   You can retrieve the latest version of the Windows Azure SDK through the Web Platform Installer, which I think is the easiest and simplest way to download and install it, since besides the SDK itself it also needs some other components. To download the latest Windows Azure SDK from the Web Platform Installer, just go to the Windows Azure website, click Develop, then .NET, and click the blue "install" button. Then you need to select which version of Visual Studio you want to use: Visual Studio 2010 or Visual Studio 2012 RC. After selecting the version you will download an EXE file. This file will lead you to install the Web Platform Installer 4.0 (if you haven't installed it) and the latest Windows Azure SDK. You can see the version name is June 2012, 1.7. Finally the WebPI will detect the dependent components you need to download and begin the installation. If you want to challenge yourself you can download the components and install them manually; the standalone installers are listed on this page, with instructions on how to install them and the necessary prerequisites.   Once you have finished the installation you can open the Visual Studio 2012 RC and, as usual, it needs to be run as administrator. If you click the New Project link on the start page and navigate to the Cloud category, you will find that there is no project template available. Is something wrong? Well, if you change the target framework from the default .NET 4.5 to .NET 4 you will see the Azure project template. This is because the Windows Azure instances do not currently support .NET 4.5. After clicking OK you will see the role creation window, which is similar to what you have seen before, but there are some new role templates in this SDK. First, you have an ASP.NET MVC 4 web role available, which means you can create ASP.NET MVC 4 applications for internet, intranet, mobile and Web API on the cloud. Then there are two new worker role templates, "Cache Worker Role" and "Worker Role with Service Bus Queue". "Worker Role with Service Bus Queue" is a worker role with the necessary references added to access the Windows Azure Service Bus queue. It also has some basic sample code in the worker role class which reads messages from the queue when started. The "Cache Worker Role" is a worker role which has the in-memory distributed cache feature enabled by default. This feature is different from Windows Azure Caching: it allows the role instances to use their memory as an in-memory distributed cache cluster. By using this feature you can have one or more worker roles acting as dedicated cache clusters. Alternatively, you can use part of your web role's and worker role's memory as the cache cluster as well. Let's just create an ASP.NET MVC 4 web role and press F5 to run it under the local emulator. If you have been working with Azure for a while you will know that you normally need to set up the local storage emulator before running locally on a fresh Azure SDK installation.
But in this version, when we start our Azure project, Visual Studio checks whether the storage emulator has been initialized; if not, it runs the initializer automatically. And as you can see, in this version the storage emulator relies on the SQL Server 2012 LocalDB feature. It will create the emulator database and tables in the default local database. You can set the storage emulator to use a standard SQL Server default instance by using the command "dsinit /instance:.". The "dsinit" tool is now located at %PROGRAM FILES%\Microsoft SDKs\Windows Azure\Emulator\devstore. After Visual Studio has compiled and deployed the package, our website should be shown in the browser. This is the MVC 4 web role home page on my Windows 8 machine in IE10. Another thing you might notice is that in this version the compute emulator uses IIS Express to host the web roles instead of the full IIS. You can add breakpoints in the code and debug, and you can use the local storage emulator to test your code for accessing the storage service; all of this is the same as what you do now on SDK 1.6. You can switch to using IIS to run your web role in the local emulator: just open the Windows Azure project's property window and, on the Web page, select "Use IIS Web Server". For more information about this please have a look at Nuno's blog post. In the role property pages in Visual Studio there are no major changes. You can configure your role settings such as the endpoints, certificates, local storage, etc. One thing that was added is the Caching tab; here you can specify whether to enable the caching feature and how much memory you want to use as the cache cluster. I will introduce more details about it in future posts. The publish and package features are also unchanged. You can publish your project to Azure directly through Visual Studio 2012, or you can create the package and upload it manually. Below is the SDK version of my deployment, which is 1.7.30602.1703 in the developer portal.   Summary In this post I introduced the new Windows Azure SDK 1.7, especially how it works with the latest Visual Studio 2012 RC. There are no significant changes in the Visual Studio tooling in this version, but there are some small enhancements such as ASP.NET MVC 4, the Cache Worker Role, and the use of SQL Server 2012 LocalDB and IIS Express.   Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
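    For reference, here is a rough sketch of the kind of code the "Worker Role with Service Bus Queue" template generates; the queue name, setting key and class names are illustrative, not copied from the template:

        using Microsoft.ServiceBus.Messaging;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WorkerRole : RoleEntryPoint
        {
            private QueueClient _client;

            public override bool OnStart()
            {
                // connection string is read from the role's configuration settings
                string connectionString =
                    RoleEnvironment.GetConfigurationSettingValue("Microsoft.ServiceBus.ConnectionString");
                _client = QueueClient.CreateFromConnectionString(connectionString, "ProcessingQueue");
                return base.OnStart();
            }

            public override void Run()
            {
                while (true)
                {
                    // blocks until a message arrives or the default timeout elapses
                    BrokeredMessage message = _client.Receive();
                    if (message != null)
                    {
                        // handle the message, then remove it from the queue
                        message.Complete();
                    }
                }
            }
        }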

    Read the article

  • File I/O error in Python

    - by serpiente
    I have a program that monitors a folder with Word documents for any modifications made to the files. The error - Windows Error [2]: The system cannot find the file specified - appears when I run the program, open a .doc within the folder, make some changes and save it. Any suggestions on how to fix this? Here's the code:
    def archivar():
        txt = open('archivo.txt', 'r+')
        for rootdir, dirs, files in os.walk(r"C:\Users\keinsfield\Desktop\colegio"):
            for file in files:
                time = os.stat(os.path.join(rootdir, file)).st_ctime
                txt.write(file + ',' + str(time) + '\n')

    def check():
        txt = [col.split(',') for col in (open('archivo.txt', 'r+').read().split('\n'))]
        files = os.listdir(r"C:\Users\keinsfield\Desktop\colegio")
        for file in files:
            for info in txt:
                if info[0] == os.stat(os.path.join(r"C:\Users\keinsfield\Desktop\colegio", file)).st_ctime:
                    print "modified"
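    A hedged guess at the cause: while a document is open, Word drops transient ~$ lock files into the folder, so they show up in os.listdir/os.walk but can vanish (or be inaccessible) by the time os.stat runs, which raises WindowsError [2]. Separately, the comparison in check() compares a file name (info[0]) against a ctime. A sketch that guards against both, with the path taken from the question:

        import os

        FOLDER = r"C:\Users\keinsfield\Desktop\colegio"

        def check():
            with open('archivo.txt') as f:
                # each stored line is "filename,ctime"
                txt = [line.split(',') for line in f.read().splitlines() if line]
            for name in os.listdir(FOLDER):
                if name.startswith('~$'):          # Word lock file, ignore it
                    continue
                try:
                    ctime = os.stat(os.path.join(FOLDER, name)).st_ctime
                except OSError:                     # file disappeared between listdir and stat
                    continue
                for stored_name, stored_time in txt:
                    if stored_name == name and stored_time != str(ctime):
                        print "modified", name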

    Read the article

  • How can I get TFS 2010 to build each project to a separate directory?

    - by Jonathan Schuster
    In our project, we'd like to have our TFS build put each project into its own folder under the drop folder, instead of dropping all of the files into one flat structure. To illustrate, we'd like to see something like this:
    DropFolder/
        Foo/
            foo.exe
        Bar/
            bar.dll
        Baz/
            baz.dll
    This is basically the same question as was asked here, but now that we're using workflow-based builds, those solutions don't seem to work. The solution using the CustomizableOutDir property looked like it would work best for us, but I can't get that property to be recognized. I customized our workflow to pass it to MSBuild as a command-line argument (/p:CustomizableOutDir=true), but it seems MSBuild just ignores it and puts the output into the OutDir given by the workflow. I looked at the build logs, and I can see that the CustomizableOutDir and OutDir properties are both getting set in the command-line arguments to MSBuild. I still need OutDir to be passed in so that I can copy my files to TeamBuildOutDir at the end. Any idea why my CustomizableOutDir parameter isn't getting recognized, or if there's a better way to achieve this?
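    For what it's worth, CustomizableOutDir was a TFS 2008-era property honored by the old TFSBuild targets; as far as I know the TFS 2010 default workflow template always passes OutDir on the MSBuild command line, which overrides anything the projects set. The usual route is therefore to edit the workflow so the MSBuild activity no longer supplies OutDir, and redirect each project yourself, roughly like this near the bottom of each .csproj (untested sketch; the TeamBuildOutDir property name is taken from the question and must be passed in by the workflow):

        <PropertyGroup Condition=" '$(TeamBuildOutDir)' != '' ">
          <OutDir>$(TeamBuildOutDir)$(MSBuildProjectName)\</OutDir>
        </PropertyGroup>

    MSBuildProjectName is a reserved MSBuild property, so each project ends up under its own subfolder of the drop location without further per-project configuration.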

    Read the article

  • Outlook 2003 View Control with C#

    - by Eleasar
    I want to embed a custom C# Windows Forms (or WPF) user control into an Outlook view. I am using Outlook 2003 and Visual Studio 2008. I downloaded an example for Outlook 2007 here: http://blogs.msdn.com/e2eblog/archive/2008/01/09/outlook-folder-homepage-hosting-wpf-activex-and-windows-forms-controls.aspx and also here: http://msdn.microsoft.com/en-us/library/aa479345.aspx. I tested it and under 2007 it works, but under 2003 I get the following error when I open the view: Could not complete the operation due to error 80131509. I can start it from Visual Studio, it registers the folder just fine, debugging works and all that. It creates an HTML page that contains my type as an object parameter - but the Initialize method that should be called is either not present (not shown via JS) or it has some errors. The breakpoints for RegisterSafeForScripting are also never hit - maybe that is related. Thanks in advance!

    Read the article

  • JMeter to grab full querystring into a variable for future use

    - by jfbauer
    Someone provided me with a regex to parse out a query string: (?<=\?)[^?]+$ I am trying to use that in JMeter with no luck (although I have been successful in pulling out individual query string parameter values, based on various examples posted on the web). I created a Regular Expression Extractor called "Grab QueryString". I selected the URL response field to check. For the reference name, I typed "myQueryString". For the regular expression, I entered the regex above. For the template, I entered $1$. Match No. = 1. Default Value = ERROR. Unfortunately, "myQueryString" is getting populated with ERROR and not the URL query string as hoped when I try to use it as a parameter in a later GET. Thus, I see this in the View Results Tree: https://www.website.com/folder/page.aspx?ERROR instead of: https://www.website.com/folder/page.aspx?jfhjHSDjgdjhsjhsdhjSJHWed Did I do something wrong? Anyone have any suggestions?
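    One detail worth checking (a hedged reading of the extractor settings): the lookbehind pattern contains no capturing group, so the $1$ template has nothing to fill and the extractor falls back to the default value, ERROR. Either keep the pattern and use $0$ as the template, or add a group, for example:

        Reference Name:      myQueryString
        Regular Expression:  \?(.+)$
        Template:            $1$
        Match No.:           1
        Default Value:       ERROR

    The later request can then use ${myQueryString} as the parameter value, e.g. https://www.website.com/folder/page.aspx?${myQueryString}.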

    Read the article

  • nmake makefile, linking object files in a subfolder

    - by Gauthier
    My makefile defines a link command: prod_link = $(LINK) $(LINK_FLAGS) -o$(PROD_OUT) $(PROD_OBJS) where $(PROD_OBJS) is a list of object files of the form: PROD_OBJS = objfile1.obj objfile2.obj objfile3.obj ... objfileN.obj Now the makefile itself is at the root of my project directory. It gets messy to have object and listing files at the root, I'd like to put them in a subfolder. Building and outputing the obj files to a subfolder works, I'm doing it with suffixes and inference: .s.obj: $(ASSEMBLY) $(FLAGS) $*.s -o Objects\$*.obj The problem is to pass the Objects folder to the link command. I tried: prod_link = $(LINK) $(LINK_FLAGS) -o$(PROD_OUT) Objects\$(PROD_OBJS) but only the first file in the list of object files gets the folder's name. How can I pass the Objects subfolder to all files of my list $(PROD_OBJS)? EDIT I tried also PROD_OBJS = $(patsubst %.ss,Object\%.obj, $(PROD_SRC)) but got: makefile(51) : fatal error U1000: syntax error : ')' missing in macro invocation Stop. This is quite strange...
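    For what it's worth, patsubst is GNU make syntax, which would explain the U1000 error; nmake only supports the $(MACRO:string1=string2) substitution. Since every object name in the example starts with "objfile", one possible (untested) workaround is to prefix the folder during substitution:

        PROD_OBJS      = objfile1.obj objfile2.obj objfile3.obj
        # nmake text substitution: prepend Objects\ to every name that contains "objfile"
        PROD_OBJS_LINK = $(PROD_OBJS:objfile=Objects\objfile)

        prod_link = $(LINK) $(LINK_FLAGS) -o$(PROD_OUT) $(PROD_OBJS_LINK)

    This only works because "objfile" appears exactly once in each name; with arbitrary names, the plain alternative is simply to write the Objects\ prefix into the PROD_OBJS list itself.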

    Read the article

  • Trace not working in a .NET DLL loaded from VB6 EXE

    - by Luis
    Hello, I have a .NET DLL that writes to the Trace, but it seems that when I call the DLL from a VB6 EXE the trace output is not written. I created a myApp.config file in the EXE folder with the trace configuration, but this does not solve the issue. I've also tried creating the Trace objects in code, but that doesn't work either: Dim _traceSrc As TraceSource = New TraceSource("myTraceSorce") Dim flListener As FileLogTraceListener = New FileLogTraceListener("myFileLogTraceListener") Dim tSwitch As SourceSwitch = New SourceSwitch("mySwitch") tSwitch.Level = _logLevel If I call my DLL from a .NET EXE it works, even if I don't have the app.config in the EXE folder, because I set it up in code if the config is not found. Thanks
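    Two hedged things to check: first, a .NET assembly loaded into a VB6 process reads configuration from the host EXE's config file, so the file would need to be named after the VB6 executable (e.g. MyVb6App.exe.config, name illustrative) and sit next to it, not myApp.config. Second, a listener created in code only fires if it is actually attached to the Trace (or to the TraceSource being written to). A rough VB sketch:

        ' Hedged sketch - attach a FileLogTraceListener explicitly instead of relying on config.
        Imports Microsoft.VisualBasic.Logging
        Imports System.Diagnostics

        Dim listener As New FileLogTraceListener("myFileLogTraceListener")
        listener.Location = LogFileLocation.Custom
        listener.CustomLocation = "C:\Logs"            ' illustrative path
        Trace.Listeners.Add(listener)                  ' or _traceSrc.Listeners.Add(listener) when writing via a TraceSource
        Trace.AutoFlush = True
        Trace.WriteLine("Trace wired up from code")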

    Read the article

  • Jquery Uploadify checkscript

    - by Kevin McPhail
    I am trying to implement the checkScript feature of Uploadify in an ASP.NET MVC view, but I can't determine what key I should be using on the controller side. Below is the PHP script, but I am not very familiar with PHP and can't determine what it is scraping out of the HTTP request. Has anyone implemented this? The documentation is a little sparse (as in nonexistent).
    $fileArray = array();
    foreach ($_POST as $key => $value) {
        if ($key != 'folder') {
            if (file_exists($_SERVER['DOCUMENT_ROOT'] . $_POST['folder'] . '/' . $value)) {
                $fileArray[$key] = $value;
            }
        }
    }
    echo json_encode($fileArray);
    ?>
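    Reading the PHP: it receives "folder" plus one or more key/filename pairs in the POST body, and echoes back, as JSON, the pairs whose files already exist on disk. A rough ASP.NET MVC 2 equivalent (controller/action names and the path handling are assumptions, not part of Uploadify itself):

        using System.Collections.Generic;
        using System.IO;
        using System.Web.Mvc;

        public class UploadController : Controller
        {
            [HttpPost]
            public ActionResult Check()
            {
                // Uploadify posts "folder" plus key/filename pairs; return the pairs that already exist.
                string folder = Request.Form["folder"] ?? "";
                var existing = new Dictionary<string, string>();

                foreach (string key in Request.Form.Keys)
                {
                    if (key == "folder") continue;
                    string fileName = Request.Form[key];
                    string physicalPath = Path.Combine(Server.MapPath("~/" + folder.TrimStart('/')), fileName);
                    if (System.IO.File.Exists(physicalPath))
                        existing[key] = fileName;
                }
                return Json(existing);   // serialized as the same key/filename object the PHP version returns
            }
        }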

    Read the article

  • ASP.Net Application Trust Medium File IO Outside Virtual Directory

    - by Trey Gramann
    I am trying to determine how suicidal this is... I have a hosting environment where a custom ASP.NET CMS application needs to access the files in the root folder of a website even though the application lives in a virtual folder, so it can be shared across many sites. I can modify the Medium trust policy on the server and came up with this... <IPermission class="FileIOPermission" version="1" Read="$AppDir$;$AppDir$\.." Write="$AppDir$;$AppDir$\.." Append="$AppDir$;$AppDir$\.." PathDiscovery="$AppDir$;$AppDir$\.."/> Oddly enough, it works. Yes, I understand it is doing this for all the apps. I am a bit at a loss as to easy ways to test what else is being exposed. Feels dangerous. Opinions?
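    A hedged alternative to widening Medium trust for every application on the box: copy web_mediumtrust.config to a new policy file that contains the extra FileIOPermission grant, register it as its own trust level, and apply that level only to the sites that need it. Roughly (file and level names are illustrative):

        <system.web>
          <securityPolicy>
            <trustLevel name="MediumPlusParentIO" policyFile="web_mediumplus_trust.config" />
          </securityPolicy>
        </system.web>

        <!-- then, only in the web.config of the sites that need the wider grant -->
        <system.web>
          <trust level="MediumPlusParentIO" />
        </system.web>

    That keeps the default Medium trust intact for everything else, so the exposure is limited to the CMS sites that actually reach outside their virtual directory.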

    Read the article

  • Rewrite rules for subfolders

    - by pg
    This may seem like a silly question but I can't figure it out. Let's say I have a public_html folder with various folders like Albatross, Blackbirds, Crows and Faqs. I want to make it so that any traffic to Albatross/faqs.php, Blackbirds/faqs.php, Crows/faqs.php etc. will see the file that is at faqs/faqs.php?bird=albatross or faqs/faqs.php?bird=crows or what have you. If I go into the Albatross folder's .htaccess file I can do this: RewriteRule faqs.php$ /faqs/faqs.php?cat=albatross [QSA] which works fine, but I want to put something in the top-level .htaccess that works for all of them, so I tried: RewriteRule faqs.php$ /faqs/faqs.php?cat=albatross [QSA] RewriteRule /(.*)/faqs.php$ /faqs/faqs.php?cat=$1 [QSA] and even RewriteRule /albatross/faqs.php$ /faqs/faqs.php?cat=albatross [QSA] and various others, but nothing seems to work. When I go to http://www.birdsandwhatnot.com/albatross/faqs.php I see the same file the same way it's always been. Does the presence of an .htaccess file in the subfolder conflict with the higher-up .htaccess file? Am I missing something?
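    A hedged observation: in per-directory context (.htaccess), mod_rewrite strips the leading slash (and the directory prefix) from the path before matching, so a pattern written with a leading "/" never matches from the root .htaccess. A root-level rule usually takes this shape (use whichever parameter name, bird or cat, faqs.php actually expects):

        RewriteEngine On
        RewriteRule ^([^/]+)/faqs\.php$ /faqs/faqs.php?bird=$1 [QSA,L]

    Note also that a subfolder's own .htaccess with mod_rewrite directives takes precedence for requests in that folder, so the per-folder rule in Albatross/.htaccess may need to be removed once the root rule is in place.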

    Read the article

  • faking a filesystem / virtual filesystem

    - by attwad
    I have a web service to which users upload Python scripts that are run on a server. Those scripts process files that are on the server, and I want them to be able to see only a certain hierarchy of the server's filesystem (ideally: a temporary folder into which I copy the files I want processed and the scripts). The server will ultimately be Linux-based, but if a solution is also possible on Windows it would be nice to know. What I thought of is creating a user with restricted access to folders of the FS - ultimately only the folder containing the scripts and files - and launching the Python interpreter as this user. Can someone give me a better alternative? Relying only on this makes me feel insecure; I would like a real sandboxing or virtual FS feature where I could safely run untrusted code.

    Read the article

  • Windows Explorer argument not being interpreted?

    - by MarceloRamires
    I'm currently using: Process.Start("explorer.exe", "/Select, " + fullPath); //with file name.extension That basically tells Windows Explorer to open the folder the file is in, with the given file selected. But, by the forces of evil, sometimes it works and sometimes it just opens the file ("/select" is... ignored). Has anyone ever experienced this? If it changes anything, it's a file on a local network and the path looks like this: \\server\folder\subfolder\something\file.ext (of course in C# every '\' is doubled, and @ is not an option; the path is generated by something else). I'll be constantly reading comments to supply any additional information.
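    One common cause (a guess, but it matches the symptoms): when the path contains spaces and is not quoted, Explorer sees a mangled argument list and falls back to just opening the file. Quoting the path right after /select usually helps:

        // quote the path so spaces in folder or file names don't break the /select argument
        string args = "/select,\"" + fullPath + "\"";
        System.Diagnostics.Process.Start("explorer.exe", args);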

    Read the article

  • Using Repository and Unit of Work patterns with Entity Framework 4.0 and MVC 2

    - by Mr. D
    Hi, I'm following the article Using Repository and Unit of Work patterns with Entity Framework 4.0. I'm trying to implement the Repository and Unit of Work patterns using ASP.NET MVC 2 and Entity Framework 4. Please let me know if I'm doing it right... In the Models folder: Northwind.edmx, Products.cs (POCO class), ProductRepository.cs (did my product query), IProductRepository.cs, NorthwindContext.cs, IUnitOfWork.cs. In the Controllers folder: ProductController.cs (retrieves from ProductRepository.cs and passes the result to the view). When I run the application, I get the error message: Mapping and metadata information could not be found for EntityType 'NorthwindMvcPoco.Models.Category'. I don't know what I'm doing wrong. I've searched the whole web and couldn't resolve this issue. Please help me.
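    That exception usually means a POCO class (here Category) does not line up by name and shape with an entity in the EDMX, or the context is not pointing at the right container. For comparison, a rough sketch of the shape NorthwindContext takes in that article's pattern; the container name, the IUnitOfWork members and the Product stub are assumptions based on the file list above, not the article's exact code:

        using System.Data.Objects;

        public class Product { /* POCO from Products.cs; property names must match the Product entity */ }

        public interface IUnitOfWork { void Commit(); }

        public class NorthwindContext : ObjectContext, IUnitOfWork
        {
            private readonly ObjectSet<Product> _products;

            // "NorthwindEntities" must match the entity container name inside Northwind.edmx,
            // and every POCO (Product, Category, ...) must match an entity by name.
            public NorthwindContext()
                : base("name=NorthwindEntities", "NorthwindEntities")
            {
                ContextOptions.LazyLoadingEnabled = true;
                _products = CreateObjectSet<Product>();
            }

            public IObjectSet<Product> Products
            {
                get { return _products; }
            }

            public void Commit()
            {
                SaveChanges();
            }
        }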

    Read the article

  • Open flash chart not working

    - by Axel
    I'm trying to implement Open Flash Chart on my website but it doesn't work. The chart just starts loading for a second, then the loader animation disappears and nothing happens (only a black SWF area). I've downloaded the latest version, which is 2, and here is my folder layout:
    ROOT
        JS/
        open-flash-chart/
        php-ofc-library/
        open-flash-chart.swf
        mydata.php
        mypage.html
    This is the content of mydata.php: {"elements":[{"type":"bar","values":[1,2,3,4,5,6,7,8,9]}],"title":{"text":"Wed Apr 21 2010"}} This is the content of mypage.html: <script type="text/javascript" src="js/swfobject.js"></script> <script type="text/javascript"> swfobject.embedSWF("open-flash-chart.swf", "my_chart", "550", "200","9.0.0", expressInstall.swf", {"data-file":"mydata.php"} ); </script> <div id="my_chart"></div> The JS folder contains swfobject and the open-flash-chart folder contains the ActionScript classes of the chart. Is there any mistake I made? Thanks
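    A hedged observation: in the mypage.html snippet the opening quote before expressInstall.swf is missing, which is a JavaScript syntax error and would stop the embed call from ever running correctly; the data-file value is also usually URL-encoded for Open Flash Chart. A corrected sketch:

        <script type="text/javascript" src="js/swfobject.js"></script>
        <script type="text/javascript">
          // note the quote before expressInstall.swf and the encoded data-file value
          swfobject.embedSWF(
              "open-flash-chart.swf", "my_chart", "550", "200", "9.0.0",
              "expressInstall.swf",
              { "data-file": encodeURIComponent("mydata.php") }
          );
        </script>
        <div id="my_chart"></div>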

    Read the article

  • python metaprogramming

    - by valya
    I'm trying to achieve a task which turns out to be a bit complicated, since I'm not very good at Python metaprogramming. I want to have a module locations with a function get_location(name), which returns a class defined in the locations/ folder, in the file with the name passed to the function. The name of the class is something like NameLocation. So, my folder structure:
    program.py
    locations/
        __init__.py
        first.py
        second.py
    program.py will be something like:
    from locations import get_location
    location = get_location('first')
    and the location is a class defined in first.py, something like this:
    from locations import Location  # base class for all locations, defined in __init__ (?)
    class FirstLocation(Location):
        pass
    etc. Okay, I've tried a lot of import and getattribute statements but now I'm bored and surrender. How do I achieve such behaviour?
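    A minimal sketch of what locations/__init__.py could look like; the class-naming convention is assumed from the question and the sketch is untested:

        # locations/__init__.py
        class Location(object):
            """Base class for all locations."""
            pass

        def get_location(name):
            # import locations.<name> and return its <Name>Location class
            module = __import__('locations.' + name, fromlist=[name])
            return getattr(module, name.capitalize() + 'Location')

    With this, get_location('first') imports locations.first and returns the FirstLocation class defined there.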

    Read the article

  • Eclipse Could not Delete error

    - by KáGé
    Hello, I'm working on a project with Eclipse and until now everything was fine, but the last time I tried building it, it returned the error "The project was not built due to "Could not delete '/Torpedo/bin/bin'.". Fix the problem, then try refreshing this project and building it since it may be inconsistent Torpedo Unknown Java Problem" and it deleted my bin folder, which stores all the images and other resources needed by the program (fortunately I had a backup). I've tried googling it and tried every solution I found, but nothing helped; also, most of them suggest deleting the folder by hand, which I can't do. What should I do? Thanks.

    Read the article

  • ASP.NET Charting Control no longer working with .NET 4

    - by Moose Factory
    I've just upgraded to .NET 4 and my ASP.NET Chart Control no longer displays. For .NET 3.5, the HTML produced by the control used to look like this: <img id="20_Chart" src="/ChartImg.axd?i=chart_5f6a8fd179a246a5a0f4f44fcd7d5e03_0.png&amp;g=16eb7881335e47dcba16fdfd8339ba1a" alt="" style="height:300px;width:300px;border-width:0px;" /> and now, for .NET 4, it looks like this (note the change in the source path): <img id="20_Chart" src="/Statistics/Summary/ChartImg.axd?i=chart_5f6a8fd179a246a5a0f4f44fcd7d5e03_0.png&amp;g=16eb7881335e47dcba16fdfd8339ba1a" alt="" style="height:300px;width:300px;border-width:0px;" /> The chart is in an MVC partial view that is in an MVC Area folder called "Statistics" and a MVC Views folder called "Summary" (i.e. "/Areas/Statistics/Views/Summary"), so this is obviously where the change of path is coming from. All I've done is to switch the System.Web.DataVisualization assembly from, 3.5 to 4.0. Any help greatly appreciated.
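    One thing worth checking (hedged; the exact cause can vary): under .NET 4 the chart HTTP handler and its ChartImageHandler setting have to be registered in the application's own web.config, and the relative ChartImg.axd URL is resolved against the current request path, which is where the /Statistics/Summary/ prefix comes from. A sketch of the usual registration, with illustrative values:

        <appSettings>
          <add key="ChartImageHandler"
               value="storage=file;timeout=20;dir=c:\TempImageFiles\;" />
        </appSettings>
        <system.web>
          <httpHandlers>
            <add path="ChartImg.axd" verb="GET,HEAD,POST"
                 type="System.Web.UI.DataVisualization.Charting.ChartHttpHandler, System.Web.DataVisualization, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
                 validate="false" />
          </httpHandlers>
        </system.web>

    Under IIS 7 integrated pipeline the equivalent handler entry goes under system.webServer/handlers. If the handler is reachable from any request path, the prefixed src the control now emits should still resolve.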

    Read the article
