Search Results

Search found 14206 results on 569 pages for 'compressed folder'.


  • How to handle 30k files in a project which requires them?

    - by Jeremiah
    Visual Studio 2010 RC - Silverlight Application. We have a library of images that we need access to. They are given to us by a vendor (through an installer) and they are not in a database; they are files in a folder (a very large monster of a folder). We do not control when the images change, so the vendor needs to be able to override them individually. We get updates from this vendor frequently enough to say that these images change "randomly" and without our (the programmers') knowledge.

    The problem: I don't want 30K images in SVN. Heck, I don't even want to imagine them in my Solution. However, our application requires them in order to run properly. So, our build/staging servers need access to these images (we have two build servers).

    The question: How would you handle it when your application will not work as specified without access to each of 30k images and you don't control when those images change? I do not want a crazy large SVN repository. Because I don't know when any of these images change, I really don't want them in my solution (and definitely do not want a large solution, either). I also don't want a bunch of manual steps to perform every time these images change. Our mantra, up to this point, has always been that any developer could download from SVN, compile and run our app. These images are going to kill that mantra.

    I'm tempted to make a WCF service that will return images if they exist and a dummy image if they don't. This way all dev boxes will return a dummy image, and our build/staging/production boxes (the ones that actually have the vendor's image installer run on them) will return real images. This has to be a solved problem. What have other people done to handle these types of problems? I'm open to suggestions.
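
    A minimal sketch of the fallback idea described above, written here as a plain ASP.NET HTTP handler rather than a full WCF service; the VendorImages path, the query-string parameter and dummy.png are placeholders, not part of the original question:

        // ImageHandler.ashx.cs - hypothetical sketch: serve the vendor image if it
        // exists on this machine, otherwise fall back to a bundled placeholder.
        using System.IO;
        using System.Web;

        public class ImageHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                // e.g. ~/images.ashx?name=widget42.png
                string name = Path.GetFileName(context.Request.QueryString["name"] ?? "");
                string vendorRoot = context.Server.MapPath("~/VendorImages"); // assumed install location
                string path = Path.Combine(vendorRoot, name);

                if (!File.Exists(path))
                    path = context.Server.MapPath("~/Content/dummy.png");     // placeholder on dev boxes

                context.Response.ContentType = "image/png";
                context.Response.TransmitFile(path);                          // streams without buffering in memory
            }
        }

    Dev boxes simply never install the vendor package, so every request falls through to the placeholder branch.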

    Read the article

  • Exited event of Process is not raised?

    - by Kanags.Net
    In my application, I am opening an Excel sheet to show one of my Excel documents to the user. Before showing the Excel file I save it to a folder on my local machine, and that copy is what is actually shown. When the user closes the application I wish to close the opened Excel files and delete all the Excel files present in my local folder. For this, in the logout event I have written code to close all the opened files, as shown below:

        Process[] processes = Process.GetProcessesByName(fileType);
        foreach (Process p in processes)
        {
            IntPtr pFoundWindow = p.MainWindowHandle;
            if (p.MainWindowTitle.Contains(documentName))
            {
                p.CloseMainWindow();
                p.Exited += new EventHandler(p_Exited);
            }
        }

    In the process's Exited event I wish to delete the Excel file whose process has exited, as shown below:

        void p_Exited(object sender, EventArgs e)
        {
            string file = strOriginalPath;
            if (File.Exists(file))
            {
                // Pdf issue fix
                FileStream fs = new FileStream(file, FileMode.Open, FileAccess.Read);
                fs.Flush();
                fs.Close();
                fs.Dispose();
                File.Delete(file);
            }
        }

    But the problem is that this Exited event is not called at all. On the other hand, if I delete the file right after closing the main window of the process, I get the exception "File already used by another process". Could anyone help me achieve my objective, or give me a reason why the process's Exited event is not being raised?
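
    One likely cause, shown as a hedged sketch: Process only raises Exited when EnableRaisingEvents is set to true, and the handler should be attached before the window is asked to close. Something along these lines (variable names follow the question):

        foreach (Process p in processes)
        {
            if (p.MainWindowTitle.Contains(documentName))
            {
                p.EnableRaisingEvents = true;                 // without this, Exited never fires
                p.Exited += new EventHandler(p_Exited);       // subscribe before closing the window
                p.CloseMainWindow();
                // alternatively, call p.WaitForExit() here and delete the file synchronously
            }
        }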

    Read the article

  • Updating iOS application content which include images

    - by azamsharp
    I am working on a vegetable gardening application. Apart from the vegetable name and description I also have a vegetable image. Currently, I have all the images in the Supporting Files folder in the Xcode project. But later on I want to update the application dynamically, without the user having to download a new version. When the user updates the application or downloads new data from the server, that data will include the images. Can I store those images in the supporting files folder, or somewhere where they can be referenced by just their name?

    RELATED QUESTION: I will also allow the user to take pictures of their vegetables and then write notes about them, like "just planted", "about to harvest", etc. What is the recommended approach for storing these pictures/photos? I could store them in the user's photo library and then keep the reference in the local database, fetching and displaying the picture using that reference. The problem with that approach is that if the user accidentally deletes the picture from the library it will no longer be displayed in my application. The only way I see around this is to store the picture in the app's local database as a BLOB.

    Read the article

  • TreeView Control Problem

    - by ProgNet
    Hi all, I have a public folder on the server that contains recursively nested subfolders. The various leaf folders contain images. I wanted to create a server-side file browser that displays the images to the user. I am using the ASP.NET TreeView control and create the tree nodes using PopulateOnDemand. When the user clicks a leaf directory, I want the images in that folder to be displayed in a DataList control. The problem is that when I click on a sub tree node (after I have expanded its parent node), the whole expanded subtree disappears and only the parent node is shown, with no + sign next to it! (I have set the TreeView's PopulateNodesFromClient property to true.) Can someone tell me what the problem is? Thanks. Here is the code:

        <asp:TreeView ID="TreeView1" runat="server" AutoGenerateDataBindings="False"
            onselectednodechanged="TreeView1_SelectedNodeChanged"
            ontreenodepopulate="TreeView1_TreeNodePopulate">
        </asp:TreeView>

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!Page.IsPostBack)
            {
                string path = Server.MapPath(".");
                PopulateTopNodes(path);
            }
        }

        private void PopulateTopNodes(string pathToRootFolder)
        {
            DirectoryInfo dirInfo = new DirectoryInfo(pathToRootFolder);
            DirectoryInfo[] dirs = dirInfo.GetDirectories();
            foreach (DirectoryInfo dir in dirs)
            {
                TreeNode folderNode = new TreeNode(dir.Name, dir.FullName);
                if (dir.GetDirectories().Length > 0)
                {
                    folderNode.PopulateOnDemand = true;
                    folderNode.Collapse();
                }
                TreeView1.Nodes.Add(folderNode);
            }
        }

        protected void TreeView1_TreeNodePopulate(object sender, TreeNodeEventArgs e)
        {
            if (IsCallback == true)
            {
                if (e.Node.ChildNodes.Count == 0)
                {
                    LoadChildNode(e.Node);
                }
            }
        }

        private void LoadChildNode(TreeNode treeNode)
        {
            DirectoryInfo dirInfo = new DirectoryInfo(treeNode.Value);
            DirectoryInfo[] dirs = dirInfo.GetDirectories();
            foreach (DirectoryInfo dir in dirs)
            {
                TreeNode folderNode = new TreeNode(dir.Name, dir.FullName);
                if (dir.GetDirectories().Length > 0)
                {
                    folderNode.PopulateOnDemand = true;
                    folderNode.Collapse();
                }
                treeNode.ChildNodes.Add(folderNode);
            }
        }

        protected void TreeView1_SelectedNodeChanged(object sender, EventArgs e)
        {
            // Retrieve the images here
        }

    Read the article

  • Can I share code & resources between Android projects without using a library?

    - by Tom
    The standard advice for sharing code & resources between Android projects is to use a library. Personally I find this works poorly if (a) the shared code changes a lot, or (b) your computer isn't fast enough. I also don't want to get into deploying multiple APKs, which seems to be necessary when I use dependent projects (i.e. Java Build Path, Projects tab). On the other hand, sharing a folder of source code by using the Eclipse linked source feature works great (Java Build Path, Source tab, Link Source button), except for these two issues:

    1) I can't use the same technique to share resources. I can create the link to the resources' parent folder, but then things get wonky and the shared resources don't get compiled (I'm using ADT 21).

    2) So then I settle for copying the shared resources into each project, but this doesn't work either: the shared code can't import the copy of its resources because it doesn't know the package name of the project that uses it. The solution I've been using is to access the resources dynamically, but that has become cumbersome as the number of resources grows.

    So, I need a solution to either (1) or (2), or I'll have to go back to a library project. (Or maybe there is another option I haven't thought of?)

    Read the article

  • Running Sitecore Production Site under a Virtual Directory

    - by danswain
    We are using Sitecore 6 on a Windows Server 2003 (32-bit) dev machine. I know it's not recommended for the CMS editing site, but we've been told it is possible to get the front-end Sitecore websites to run from within a virtual directory. Here's the issue: we'd like to achieve what the poor man's diagram below shows. We have a website (.net 1.1):

        /WebSiteRoot (.net 1.1)
        |
        |---- Custom .net 1.1 Web Application
        |
        |---- SiteCore frontend WebApplication (.net 2.0)
        |
        |---- Custom .net 2.0 WebApplication

    The Sitecore WebApplication would contain the Sitecore pipeline in its web.config, and we'd make use of the appropriate config section to configure the virtual folder, to allow for where our Sitecore app sits and point it to the appropriate place in the Content Tree. Is it possible to pull this off? This is just the customer-facing website; there will be no CMS editing functionality on these servers. That will be done from a more standard Sitecore install inside the firewall on a different server. The errors we're encountering are centered around loading the various config files in the App_Config folder. It seems to do a Server.MapPath on "/" initially (which is wrong for us), so we've tried putting absolute paths in the web.config and still no joy (I think there must be some hardcoded piece that looks for the Include directory). Any help would be greatly appreciated. Thanks

    Read the article

  • how to delete multiple folders, desktop and start menu shortcuts using vbscript

    - by user1756858
    I never did any VBScript before, so I don't know if my question is a very easy one. The following is the flow of steps that has to be done: check for a folder at c:\test1 and delete it if found, then continue (if not found, just continue). Check for a folder at c:\programfiles\test2 and delete it if found, then continue (if not found, just continue). Check whether a desktop shortcut and a start menu shortcut exist and delete them if found; if not, exit. I could delete the two folders with the following code:

        strPath1 = "C:\test1"
        strPath1 = "C:\test1"
        DeleteFolder strPath1
        DeleteFolder strPath1

        Function DeleteFolder(strFolderPath1)
            Dim objFSO, objFolder
            Set objFSO = CreateObject("Scripting.FileSystemObject")
            If objFSO.FolderExists(strFolderPath) Then
                objFSO.DeleteFolder strFolderPath, True
            End If
            Set objFSO = Nothing
        End Function

    But I need to run one script that deletes the two folders in their different paths plus the two shortcuts, one in the start menu and one on the desktop. I was experimenting with this code to delete a shortcut on my desktop:

        Dim WSHShell, DesktopPath
        Set WSHShell = WScript.CreateObject("WScript.Shell")
        DesktopPath = WSHShell.SpecialFolders("Desktop")
        On Error Resume Next
        Icon = DesktopPath & "\sample.txt"
        Set fs = CreateObject("Scripting.FileSystemObject")
        Set A = fs.GetFile(Icon)
        A.Delete
        WScript.Quit

    It works fine for a txt file on the desktop, but how do I delete a shortcut for an application from the desktop as well as from the start menu?
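
    A hedged sketch of one way to combine the pieces above into a single script; the folder paths, the shortcut name MyApp.lnk, and the Programs subfolder of the Start Menu are assumptions for illustration:

        Option Explicit
        Dim fso, shell
        Set fso = CreateObject("Scripting.FileSystemObject")
        Set shell = CreateObject("WScript.Shell")

        ' Delete the two folders if they exist
        DeleteFolderIfExists "C:\test1"
        DeleteFolderIfExists shell.ExpandEnvironmentStrings("%ProgramFiles%") & "\test2"

        ' Delete the desktop and Start Menu shortcuts if they exist (names assumed)
        DeleteFileIfExists shell.SpecialFolders("Desktop") & "\MyApp.lnk"
        DeleteFileIfExists shell.SpecialFolders("StartMenu") & "\Programs\MyApp.lnk"

        Sub DeleteFolderIfExists(path)
            If fso.FolderExists(path) Then fso.DeleteFolder path, True
        End Sub

        Sub DeleteFileIfExists(path)
            If fso.FileExists(path) Then fso.DeleteFile path, True
        End Sub

    A shortcut is just a .lnk file, so the same FileSystemObject calls used for the txt file apply; only the special-folder paths differ.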

    Read the article

  • Dealing with image upload on server

    - by user1073320
    I have the following problem: I have a multi-step form where in one step the user uploads an image to the server and then, a few steps further on, supplies other information. When that information is invalid no data should be committed, and the image should also be deleted. I was thinking about the PHP session, but I've read here ("PHP - Store Images in SESSION data?") that it is an inefficient way to do it. Every time you proceed a step in the form, the image is reloaded (in the session), and as somebody mentioned, "You will want it to only be as big as it needs to be and you need to delete it as soon as you don't need it because large pieces of information in the session will slow down the session startup." Here I have a question: will it slow down the session startup of the user who uploaded the file, or the sessions of all users? I have to mention that I'm looking for a solution that doesn't rely on operating system scripts (cron etc.) - I have no permission to run such scripts. The perfect solution for me would be: save the image on disk (for example in a folder named after the session id), then after the last step of the form either move this image or delete it, depending on form validation. If the user unexpectedly destroys the session (for example by closing the browser), the folder with the image should of course be deleted. In a nutshell, I need something like a callback for the "session destroyed" event.
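
    A hedged sketch of the on-disk approach described above, assuming PHP 5.4+ and a writable uploads/pending/ directory (both the layout and the field names are placeholders). Overriding destroy() on the default SessionHandler acts as the "session destroyed" callback; note it only fires when PHP actually destroys the session (explicit logout or garbage collection of an expired session), which is what stands in for cron here:

        <?php
        // Keep each pending upload in a folder named after the session id.
        class CleanupSessionHandler extends SessionHandler
        {
            public function destroy($sessionId)
            {
                $dir = __DIR__ . "/uploads/pending/" . $sessionId;   // assumed layout
                array_map('unlink', glob("$dir/*") ?: array());
                if (is_dir($dir)) { rmdir($dir); }
                return parent::destroy($sessionId);
            }
        }

        session_set_save_handler(new CleanupSessionHandler(), true);
        session_start();

        // Upload step: stash the image under this session's folder.
        $pending = __DIR__ . "/uploads/pending/" . session_id();
        if (!is_dir($pending)) { mkdir($pending, 0775, true); }
        if (isset($_FILES['image'])) {
            move_uploaded_file($_FILES['image']['tmp_name'], "$pending/image.jpg");
        }

        // Final step: commit or discard based on validation (hypothetical $isValid).
        // if ($isValid) { rename("$pending/image.jpg", __DIR__ . "/uploads/final/" . session_id() . ".jpg"); }
        // else         { unlink("$pending/image.jpg"); rmdir($pending); }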

    Read the article

  • php lampp permissions of fopen function

    - by marmoushismail
    Hi, I'm programming PHP using NetBeans 6.8, LAMPP for Ubuntu (XAMPP), and the Apache that came with XAMPP.

        $fh = fopen("testfile2.txt", 'w') or die("Failed to create file");
        $text = "hello man cool good";
        fwrite($fh, $text) or die("Could not write to file");
        fclose($fh);
        echo "File 'testfile.txt' written successfully";

    I get the following error:

        Warning: fopen(testfile2.txt) [function.fopen]: failed to open stream: Permission denied
        in /home/marmoush/allprojects/phpprojects/myindex.php on line 91
        Failed to create file

    Anyway, I know what this error is; it's about folder and file permissions. I looked into the folder's permissions tab and made access available to the "others" group (read and write). The program worked and the result was a file (test.txt). So I looked at the created file's permissions, and it appears that whatever creates the file (PHP, XAMPP or whoever) creates it with "nobody" ownership. I have two questions: 1) What if I need the file created by the PHP code under XAMPP to have root/user/my-own ownership? Where do I set this? 2) Also, my concern is: if I move these files to an actual web server, will files it creates be owned by "nobody" there as well?

    Read the article

  • only default controller is loading for all requests - Critical

    - by Jayapal Chandran
    Hi, my CodeIgniter project is live. I have two copies of it: one in the root and another in a subfolder, both configured to work normally. The root copy is the one that was made after testing in a subfolder. While running from the subfolder everything worked well, but when copied to the root folder the default controller is loaded for all requests - whereas in subfolders and on other servers it works fine. It is like the following: a true copy in the root folder, like sitename.com, and another true copy in a subfolder, like sitename.com/abc. When requesting sitename.com/gallery the default controller is loaded instead of the gallery controller. When I tried sitename.com/index.php/gallery/ it worked well... but sitename.com/gallery/ shows only the default controller, that is, the index page. Here is my htaccess:

        php_flag magic_quotes_gpc off
        php_flag short_open_tag on
        RewriteEngine on
        RewriteCond $1 !^(index\.php|images|css|static|font|xml|flash|galleryimages|htc|store|robots\.txt)
        RewriteRule ^(.*)$ index.php/$1 [L]

    The server is Linux barracuda.elinuxservers.com 2.6.27.18-21 #1 SMP Tue Aug 25 18:13:37 UTC 2009 i686, PHP Version 5.2.9.
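
    Two things commonly checked in this situation, shown as a hedged sketch rather than a confirmed fix: an explicit RewriteBase for the root copy, and CodeIgniter's uri_protocol setting, which on some hosts fails to parse rewritten URLs when left on AUTO:

        # .htaccess in the site root (RewriteBase added)
        RewriteEngine on
        RewriteBase /
        RewriteCond $1 !^(index\.php|images|css|static|font|xml|flash|galleryimages|htc|store|robots\.txt)
        RewriteRule ^(.*)$ index.php/$1 [L]

        // application/config/config.php
        $config['index_page']   = '';
        $config['uri_protocol'] = 'REQUEST_URI';   // try REQUEST_URI or PATH_INFO instead of AUTO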

    Read the article

  • Sorting by some field and fetching whole tree from DB

    - by Niaxon
    Hello everyone, I am trying to build a file browser in tree form and have a problem sorting it. I use PHP and MySQL for this. I've created a mixed (nested set + adjacency) table 'element' with the following fields: element_id, left_key, right_key, level, parent_id, element_name, element_type (enum: 'folder','file'), element_size. Let's not discuss right now whether it would be better to move the information about an element (name, type, size) into another table. The function that scans the specified directory and fills the table works correctly. Notably, I am adding elements to the tree in a specific order: folders first, and only then files. After that I can easily fetch and display the whole table on the page using a simple query:

        SELECT * FROM element WHERE 1=1 ORDER BY left_key

    With the result of that query and another function I can generate the correct HTML code (<ul><li>... and so on). Now back to the question (finally, huh?). I am struggling to add sorting functionality. For example, I want to order my result by size. Here I need to keep in mind the whole hierarchy of the tree and the rule: folders first, files later. I believe I can do that by generating a recursive query in PHP:

        SELECT * FROM element WHERE parent_id = {$parentId}
        ORDER BY element_type (so folders would be first), size (or name for example)

    After that, for each result which is a folder, I would send another query to get its content. It is also possible to fetch the whole tree by left_key and then sort it in PHP as an array, but I guess that would be worse :) I wonder if there is a better and more efficient way to do such a thing?
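
    A hedged sketch of the per-parent approach described above, using PDO; column names follow the question, and the folders-first rule relies on MySQL sorting an ENUM by its declaration order ('folder' before 'file'):

        <?php
        // Build the tree one folder at a time: folders first, then files, each ordered by size.
        function fetchTree(PDO $db, $parentId = null)
        {
            $sql = "SELECT * FROM element
                    WHERE parent_id " . ($parentId === null ? "IS NULL" : "= :pid") . "
                    ORDER BY element_type, element_size";   // enum('folder','file') puts folders first
            $stmt = $db->prepare($sql);
            if ($parentId !== null) {
                $stmt->bindValue(':pid', $parentId, PDO::PARAM_INT);
            }
            $stmt->execute();

            $nodes = array();
            foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
                if ($row['element_type'] === 'folder') {
                    $row['children'] = fetchTree($db, $row['element_id']);
                }
                $nodes[] = $row;
            }
            return $nodes;
        }

    This issues one query per folder; fetching everything once ordered by left_key and sorting each sibling group in PHP avoids the extra round trips at the cost of a bit more code.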

    Read the article

  • Declaring models elsewhere than in "models.py" AND dynamically

    - by sebpiq
    Hi! I have an application that splits models into different files. The folder actually looks like this:

        myapp/
            __init__.py
            models.py
            hooks/
                ...

    myapp doesn't care about what's in the hooks folder, except that there are models there and that they have to be declared somehow. So, I put this in myapp/__init__.py:

        from django.conf import settings
        for hook in settings.HOOKS:
            try:
                __import__(hook)
            except ImportError as e:
                print "Got import err !", e
        # where settings.HOOKS = ("myapp.hooks.a_super_hook1", ...)

    The problem is that it doesn't work when I run syncdb (and it throws a strange "Got import err !"... strange considering that the error refers to another module of my program that I don't even import anywhere :/ ). So I tried, successively:

    1) for hook in settings.HOOKS: try: exec("from %s import *" % hook) - doesn't work either: syncdb doesn't install the models in hooks
    2) from myapp.hooks.a_super_hook1 import * - this works
    3) exec("from myapp.hooks.a_super_hook1 import *") - this works too

    I checked that in test 1) the statement executed is the same as in tests 2) and 3), and it is exactly the same... Any idea???
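
    A hedged note, sketched for Python 2.7+: __import__("a.b.c") returns the top-level package rather than the submodule, and the try/except above swallows the real ImportError, so the module that actually fails stays hidden from syncdb. importlib states the intent more directly and lets the original error surface:

        from django.conf import settings
        import importlib

        for hook in settings.HOOKS:              # e.g. ("myapp.hooks.a_super_hook1", ...)
            try:
                importlib.import_module(hook)    # returns (and executes) the submodule itself
            except ImportError:
                # Re-raise (or log the full traceback) instead of printing and continuing,
                # so the failing module is visible when syncdb runs.
                raise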

    Read the article

  • git submodule svn external

    - by Jason
    Let's say I have 3 git repositories, each with a lib and a tests folder in the root. All 3 repositories are part of what I want to be a single package, but it is important to me to keep the repositories separate. I am new to git, coming from svn, so I have been reading up on submodules and how they differ from svn:externals. In SVN I could have a single lib/vendor/package directory, and inside package I could set up 3 externals pointing to each of my 3 repositories' lib directory, renaming them appropriately like:

        lib/vendor/package/a -> repo1/lib
        lib/vendor/package/b -> repo2/lib
        lib/vendor/package/c -> repo3/lib

    But from my understanding this is not possible with git. Am I missing something? Really I'm hoping this can be solved in one of two ways: (1) someone will point out how to create a 4th git repository which has the other 3 as submodules organized as I have mentioned above (where I can have an a, b, and c folder inside the root), or (2) someone will point out how to set this up using svn:externals in combination with GitHub's svn support, referencing the lib directory within each git repository (from my understanding this is impossible).

    Read the article

  • Can't display image with ImageView on Android

    - by user1029167
    In my \drawable-mdpi folder, I have an image named test.jpg. In my main.xml file, in my LinearLayout section, I have:

        <ImageView android:id="@+id/test_image"
            android:src="@drawable/test"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content" />

    In my src folder, I have only one file, HelloAndroidActivity.java, with only the following method:

        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);
            ImageView image = new ImageView(this);
            image = (ImageView) findViewById(R.id.test_image);
            setContentView(image);
        }

    This seems to be correct, yet whenever I try to run it, I get "The application HelloAndroid (process xxxxx) has stopped unexpectedly. Please try again." The strange part is that it previously did display the image, but now it won't and I don't know why. Also, when I comment out the ImageView code and replace it with TextView code, i.e.

        TextView tv = new TextView(this);
        tv.setText("Does this work?");
        setContentView(tv);

    it does display the text. Edit: was asked to post logcat. Link to pastebin.
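
    A hedged guess at the crash, sketched below: after setContentView(R.layout.main), the ImageView returned by findViewById already has a parent (the LinearLayout), so passing it to a second setContentView call typically throws an IllegalStateException ("The specified child already has a parent"). Using the layout's own ImageView directly avoids that:

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);                    // inflate main.xml once

            // The ImageView is already in the layout; just look it up and use it.
            ImageView image = (ImageView) findViewById(R.id.test_image);
            image.setImageResource(R.drawable.test);          // redundant with android:src, shown for clarity
            // No second setContentView(image) call - the view already has a parent.
        }

    The posted logcat would confirm whether this is the actual exception.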

    Read the article

  • .net web service annoying issue

    - by JL
    Excuse the title, but it's best I just explain the problem. I have two projects in my solution: a class library, and a web application which consists of a web service (asmx). The web service has code sitting in the App_Code folder, in a file [webservicename].cs. Inside the web service code-behind class I have a web method; here is a simplified example:

        [WebMethod]
        public EnumTaskExportState ProcessTask()
        {
            var tm = new UploadTaskManager();
            return tm.ProcessTask();
        }

    Now at design time in Visual Studio (2010 or 2008), when I right-click on UploadTaskManager and select "Go to definition", I get taken to AppData\Temp[some folder structure]...etc.... and it displays the public class definition. Instead I would like complete integration, so that I get taken directly to the actual class in the class library project. My guess is this is happening because I am using the App_Code route, and not a compiled file for the web service class. But I don't know any other way to do this. How can I fix this? Possibly do away with the need for the App_Code directory?

    Read the article

  • Problem deleting .svn directories on Windows XP

    - by John L
    I don't seem to have this problem on my home laptop with Windows XP, but then I don't do much work there. On my work laptop, with Windows XP, I have a problem deleting directories when they contain directories that contain .svn directories. When the delete does eventually work, I have the same issue emptying the Recycle Bin. The pop-up window says "Cannot remove folder text-base: The directory is not empty" (or prop-base, or some other folder under .svn). This continued to happen after I changed the TortoiseSVN config to stop the TSVNCache process from running, and after a reboot of the system. Multiple tries will eventually get it done, but it is a huge annoyance because there are other issues I'm trying to fix, so I'm hoping it is related. 'Connected Backup PC' also runs on the laptop, and the real problem is that cygwin commands don't always work. So I keep thinking the dot files and dot directories have something to do with both problems, and/or that the backup or some other process scanning the directories is doing it. But I've run out of ideas of what to try or how to identify the problem further.

    Read the article

  • Better way to download a binary file?

    - by geoff
    I have a site where a user can download a file. Some files are extremely large (the largest being 323 MB). When I test it and try to download this file, I get an out-of-memory exception. The only way I know to download the file is shown below. The reason I'm using this code is that the URL is encoded and I can't let the user link directly to the file. Is there another way to send this file without having to read the whole thing into a byte array?

        FileStream fs = new FileStream(context.Server.MapPath(url), FileMode.Open, FileAccess.Read);
        BinaryReader br = new BinaryReader(fs);
        long numBytes = new FileInfo(context.Server.MapPath(url)).Length;
        byte[] bytes = br.ReadBytes((int) numBytes);
        string filename = Path.GetFileName(url);
        context.Response.Buffer = true;
        context.Response.Charset = "";
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.ContentType = "application/x-rar-compressed";
        context.Response.AddHeader("content-disposition", "attachment;filename=" + filename);
        context.Response.BinaryWrite(bytes);
        context.Response.Flush();
        context.Response.End();
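
    A hedged alternative that avoids the byte array entirely: either let IIS stream the file with Response.TransmitFile, or copy it to the response in small chunks. A sketch of the chunked version (the 64 KB buffer size is arbitrary):

        string physicalPath = context.Server.MapPath(url);
        string filename = Path.GetFileName(physicalPath);

        context.Response.Buffer = false;                       // don't accumulate the whole file in memory
        context.Response.ContentType = "application/x-rar-compressed";
        context.Response.AddHeader("content-disposition", "attachment;filename=" + filename);
        context.Response.AddHeader("content-length", new FileInfo(physicalPath).Length.ToString());

        using (FileStream fs = new FileStream(physicalPath, FileMode.Open, FileAccess.Read))
        {
            byte[] buffer = new byte[64 * 1024];               // 64 KB chunks
            int read;
            while ((read = fs.Read(buffer, 0, buffer.Length)) > 0 && context.Response.IsClientConnected)
            {
                context.Response.OutputStream.Write(buffer, 0, read);
            }
        }
        context.Response.End();
        // Or, in one line: context.Response.TransmitFile(physicalPath);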

    Read the article

  • CodeIgniter -- unable to use an object

    - by Smandoli
    THE SUMMARY: When I call .../index.php/product, I receive:

        Fatal error: Call to a member function get_prod_single() on a non-object
        in /var/www/sparts/main/controllers/product.php on line 16

    The offending line 16 is:

        $data['pn_oem'] = $this->product_model->get_prod_single($product_id);

    It looks like I don't know how to make this a working object. Can you help me?

    THE CODE: In my /Models folder I have product_model.php:

        <?php
        class Product_model extends Model {

            function Product_model()
            {
                parent::Model();
            }

            function get_prod_single($product_id)
            {
                // This will be a DB lookup ...
                return 'foo'; // stub to get going
            }
        }
        ?>

    In my /controllers folder I have product.php:

        <?php
        class Product extends Controller {

            function Product()
            {
                parent::Controller();
            }

            function index()
            {
                $this->load->model('Product_model');
                $product_id = 113; // will get this dynamically
                $data['product_id'] = $product_id;
                $data['pn_oem'] = $this->product_model->get_prod_single($product_id);
                $this->load->view('prod_single', $data);
            }
        }
        ?>
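
    A hedged observation, assuming nothing else is wrong with the install: in CodeIgniter 1.x the loaded model is attached to the controller under the exact name passed to load->model(), so loading 'Product_model' gives $this->Product_model, not $this->product_model. A sketch of the two usual ways to make the call match:

        // Option 1: use the same case as the name passed to the loader
        $this->load->model('Product_model');
        $data['pn_oem'] = $this->Product_model->get_prod_single($product_id);

        // Option 2: assign the model to an explicit (lowercase) property name
        $this->load->model('Product_model', 'product_model');
        $data['pn_oem'] = $this->product_model->get_prod_single($product_id);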

    Read the article

  • Granting administrator privileges to an application launched at startup without UAC prompt?

    - by iKenndac
    Background: I've written a small C#/.NET 4.0 application that syncs various settings from a game installed in Program Files to and from other copies of the same game on different machines (think Chrome bookmark sync, but for this game). The sync itself is a relatively simple affair, dealing with files stored inside the game's Program Files folder. On my machine, this works fine without having to elevate my application through UAC. Windows 7 makes the game use Program Files virtualization, and my application works fine with that. However, on a lot of testers' machines, I'm getting reports that the application either can't work with the files or in some cases can't even see the game's folder! Having the user right-click and "Run as Administrator" solves the problem in every case.

    So, we just set the application's manifest to require admin privileges, right? That's fine (although not ideal) for when the user manually invokes the application or the sync process, because they'll be interacting with the application and ready to accept a UAC request. However, one of the features of my application is a "Sync Automatically" option, which allows the user to "set and forget" the application. With this set, the application puts itself into the registry at HKCU\Software\Microsoft\Windows\CurrentVersion\Run to be run at startup, and sits in the system tray syncing the settings in the background as needed. Obviously, I need to be smarter here. Presenting a UAC prompt as soon as the user logs in to their account, or at random intervals afterwards, isn't the way forward.

    So, my question! What's the best way to approach a situation where I'd need to run an application at startup that needs administrator privileges? Is there a way to have the user authorise an installation that causes the system to automatically run the application with the correct privileges without a prompt at startup/login?
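
    A commonly used workaround, sketched here as an assumption rather than a confirmed recommendation for this app: register an elevated scheduled task once, during an installation step that already shows a UAC prompt, and trigger that task at logon instead of using the Run key. The task name and executable path below are placeholders:

        rem Run once from an elevated installer/setup step
        schtasks /create /tn "GameSettingsSync" /tr "C:\Program Files\MyApp\SyncApp.exe" ^
                 /sc onlogon /rl highest /f

    After that, the task starts the tray application with highest available privileges at each logon without a further prompt for that user.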

    Read the article

  • Cannot override the CSS at this site

    - by gdanko
    This site is overriding my CSS with its own and I cannot get around it! It has style.css with "text-align: center" on the body. I have a <div id="mydiv"> appended to the body, and it normally has "text-align: left". There are <ul>s and <li>s underneath #mydiv and they are inheriting the body's 'center' for some reason. I tried this and it's still not working:

        $('#mydiv').children().css('text-align', 'auto');

    How the heck do I reclaim my CSS!? @Grillz, the HTML looks like this:

        <div id="mydiv">
            <ul class="container">
                <li rel="folder" class="category"><a href="#">category1</a>
                    <ul><li rel="file" class="subcategory"><a href="#">subcategory1</a></li></ul>
                    <ul><li rel="file" class="subcategory"><a href="#">subcategory2</a></li></ul>
                </li>
                <li rel="folder" class="category"><a href="#">category2</a>
                    <ul><li rel="file" class="subcategory"><a href="#">subcategory3</a></li></ul>
                    <ul><li rel="file" class="subcategory"><a href="#">subcategory4</a></li></ul>
                </li>
            </ul>
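
    A hedged note on why the jQuery line above does nothing: 'auto' is not a valid value for text-align, so browsers drop the declaration, and text-align inherits down to every descendant, not just direct children. A sketch of two ways that usually work:

        /* CSS: re-assert the alignment for #mydiv and everything inside it */
        #mydiv, #mydiv * { text-align: left; }

        // or with jQuery, using a real value and hitting all descendants:
        $('#mydiv, #mydiv *').css('text-align', 'left');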

    Read the article

  • How best to organize project folders for unit tests in .NET?

    - by Dan Bailiff
    So I'm trying to introduce unit testing to my group. I've successfully upgraded a VS '05 web site project to a VS '08 web application, and now have a solution with the web app project and a unit test project. The issue now is how to fit this back into the source repository so that we don't break the build system and the unit test projects are persisted as well. Right now we have something like this:

        c:\root
        c:\root\projectA
        c:\root\projectB
        c:\root\projectC

    where projectA contains the sln file and all other related files/folders for the project. Now I have this new solution that looks like this:

        c:\root\projectA (parent folder)
        c:\root\projectA\projectA (the production code project)
        c:\root\projectA\projectA_Test (the unit test project)
        c:\root\projectA\TestResults
        c:\root\projectA\projectA.sln

    How do I integrate this new structure back into the code repository? I'd really prefer to keep the production code folder where it was in the source repository for the sake of the build, but is this necessary? If I keep the production code project in its usual place, then where do I keep my unit test projects and how do I connect them with a sln file? Is it better to use this new structure and adjust the build process? I'd love to hear how other people are dealing with this issue of upgrading legacy projects to unit testing.

    Read the article

  • Designers, Expression or SharePoint Designer, and real source control

    - by David Lively
    I'm trying desperately to move from VSS to a real source control system. Options include TFS and SVN. My designers need to keep their ability to modify source files and instantly preview their changes in a browser without having to commit their changes. Using FPSE with VSS, this works flawlessly, since saving a file causes the copy in the working folder on the dev server to be updated, so they can just save and refresh their browser which is pointed at the dev server. The site in question consists of 350k+ lines of classic ASP code and some new ASP.NET MVC. They only need to be able to modify views within the MVC code, not C#. Though Expression includes a version of Cassini for local debugging, Cassini does not support classic ASP. Surely someone has solved this problem before. It can't be necessary to install IIS on each designer's machine (this is absolutely untenable). I need a way to have a common working folder on a dev webserver updated whenever someone saves a file locally, just like using FPSE. I'd rather not write an FPSE proxy that knows how to talk to TFS/SVN. Any suggestions? (I know I've asked this question in the past, but I haven't yet found a solution.)

    Read the article

  • Loading specific files from arbitrary directories?

    - by Haydn V. Harach
    I want to load foo.txt. foo.txt might exist in the data/bar/ directory, or it might exist in the data/New Folder/ directory. There might be a different foo.txt in both of these directories, in which case I would want to either load one and ignore the other according to some order that I've sorted the directories by (perhaps manually, perhaps by date of creation), or else load them both and combine the results somehow. The latter (combining the results of both/all foo.txt files) is circumstantial and beyond the scope of this question, but something I want to be able to do in the future. I'm using SDL and boost::filesystem. I want to keep my list of dependencies as small as possible, and as cross-platform as possible. I'm guessing that my best bet would be to get a list of every directory (within the data/ folder), sort/filter this list, then when I go to load foo.txt, search for it in each potential directory? This sounds like it would be very inefficient if I have dozens of potential directories to search through every time. What's the best way to go about accomplishing this? Bonus: What if I want some of the directories to be archives? I.e., consider both data/foo/ and data/bar.zip valid, and pull foobar.txt from either one without caring which it came from.
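
    A hedged sketch of an index-based approach using boost::filesystem (already a dependency here): walk the data directories once in priority order, remember the first path seen for each filename, and make each lookup a map search instead of a directory scan. Archive support would slot in where the comment indicates.

        #include <boost/filesystem.hpp>
        #include <map>
        #include <string>
        #include <vector>

        namespace fs = boost::filesystem;

        // filename -> full path of the highest-priority copy
        std::map<std::string, fs::path> build_index(const std::vector<fs::path>& roots)
        {
            std::map<std::string, fs::path> index;
            for (const fs::path& root : roots)              // roots already sorted by priority
            {
                if (!fs::is_directory(root)) continue;      // a .zip root would be handled here instead
                for (fs::recursive_directory_iterator it(root), end; it != end; ++it)
                {
                    if (!fs::is_regular_file(it->path())) continue;
                    const std::string name = it->path().filename().string();
                    index.insert(std::make_pair(name, it->path()));  // keeps the first (highest-priority) hit
                }
            }
            return index;
        }

        // usage sketch:
        //   std::vector<fs::path> roots = { "data/bar", "data/New Folder" };
        //   std::map<std::string, fs::path> idx = build_index(roots);
        //   std::map<std::string, fs::path>::iterator hit = idx.find("foo.txt");
        //   if (hit != idx.end()) { /* open hit->second */ }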

    Read the article

  • Please tell me what is wrong with my threading!!!

    - by kiddo
    I have a function where I compress a bunch of files into a single compressed file. It is taking a long time (to compress), so I tried implementing threading in my application. Say I have 20 files for compression; I split that as 5*4=20, and in order to do that I have separate variables (which are used for compression) for all 4 threads so as to avoid locks, and I wait until the 4 threads finish. Now, the threads are working, but I see no improvement in their performance. Normally it takes 1 minute for 20 files (for example); after implementing threading there is only a 3 or 5 second difference, sometimes none at all. Here I will show the code for one thread (it is the same for the other 3 threads):

        // main thread
        myClassObject->thread1 = AfxBeginThread((AFX_THREADPROC)MyThreadFunction1, myClassObject);
        ....
        HANDLE threadHandles[4];
        threadHandles[0] = myClassObject->thread1->m_hThread;
        ....
        WaitForSingleObject(myClassObject->thread1->m_hThread, INFINITE);

        UINT MyThreadFunction(LPARAM lparam)
        {
            CMerger* myClassObject = (CMerger*)lparam;
            CString outputPath = myClassObject->compressedFilePath.GetAt(0); // contains the o/p path
            wchar_t* compressInputData[] = {myClassObject->thread1outPath, COMPRESS,
                                            (wchar_t*)(LPCTSTR)(outputPath)};
            HINSTANCE loadmydll;
            loadmydll = LoadLibrary(myClassObject->thread1outPath);
            fp_Decompress callCompressAction = NULL;
            int getCompressResult = 0;
            myClassObject->MyCompressFunction(compressInputData, loadClient7zdll, callCompressAction,
                                              myClassObject->thread1outPath, getCompressResult,
                                              minIndex, myClassObject->firstThread, myClassObject);
            return 0;
        }
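
    One detail worth checking, sketched below under the assumption that the four worker threads are created as in the code above: CWinThread auto-deletes itself when the thread ends, so its m_hThread can become invalid before it is waited on, and waiting on only one handle does not synchronize with the other three. Whether this explains the missing speedup also depends on whether the work is CPU-bound at all; if the compression DLL serializes internally or the job is disk-bound, four threads will not help much.

        // Keep the thread objects alive until we have waited on all of them.
        CWinThread* threads[4];
        HANDLE threadHandles[4];

        for (int i = 0; i < 4; ++i)
        {
            threads[i] = AfxBeginThread((AFX_THREADPROC)MyThreadFunction1, myClassObject,
                                        THREAD_PRIORITY_NORMAL, 0, CREATE_SUSPENDED);
            threads[i]->m_bAutoDelete = FALSE;      // so m_hThread stays valid after the thread exits
            threadHandles[i] = threads[i]->m_hThread;
            threads[i]->ResumeThread();
        }

        // Block until all four worker threads are done, not just the first one.
        WaitForMultipleObjects(4, threadHandles, TRUE, INFINITE);

        for (int i = 0; i < 4; ++i)
            delete threads[i];                      // we own them now that auto-delete is off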

    Read the article

  • How to handle javascript & css files across a site?

    - by Industrial
    Hi everybody, I have had some thoughts recently on how to handle shared JavaScript and CSS files across a web application. In the web application I am currently working on, I have quite a large number of different JavaScript and CSS files placed in a folder on the server. Some of the files are reused, while others are not. On a production site, it's quite wasteful to have a high number of HTTP requests and many kilobytes of unnecessary JavaScript and redundant CSS being loaded. The solution to that is of course to create one big bundled file per page that contains only the necessary code, which is then minified and sent compressed (gzip) to the client. There's nothing to worry about in bundling and minifying a set of JavaScript files manually if you were only going to do it once, but since the app is continuously maintained, and things do change and develop, it quite soon becomes a headache to do this manually while pushing out new updates that include changes to JavaScript and/or CSS files to production. What's a good approach to handling this? How do you handle this in your application?
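
    One common pattern, sketched here under the assumption of a PHP stack (the file list, paths, and script name are all placeholders): keep a per-page manifest of source files, concatenate them at deploy time, and embed a content hash in the bundle filename so it can be cached aggressively and is invalidated automatically whenever anything changes.

        <?php
        // build_bundles.php - run at deploy time; minification and gzip are left to a minifier / the web server.
        $bundles = array(
            'home'  => array('js/jquery.js', 'js/site.js', 'js/home.js'),   // per-page manifests (assumed)
            'admin' => array('js/jquery.js', 'js/admin.js'),
        );

        foreach ($bundles as $page => $files) {
            $source = '';
            foreach ($files as $file) {
                $source .= file_get_contents($file) . ";\n";                // ';' guards against missing semicolons
            }
            $name = sprintf('dist/%s.%s.js', $page, substr(md5($source), 0, 8));
            file_put_contents($name, $source);
            echo "$page -> $name\n";   // templates read this mapping to emit the <script> tag
        }

    The same manifest idea applies to CSS; build tools and framework bundlers do essentially this with more polish.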

    Read the article
