Search Results

Search found 38641 results on 1546 pages for 'locate files'.


  • Using IAM for user authentication

    - by mdavis6890
    I've read lots and lots of posts that touch on what I think should be a very common use case, but without finding exactly what I want, or a simple reason why it can't be done. I have some files on S3. I want to be able to grant certain users access to certain files, via a front end that I build. So far, I've made it work this way:

    - I built the front end in Django, using its built-in Users and Groups.
    - I have a model for Buckets, in which I mirror my S3 buckets.
    - I have an m2m relationship from groups to buckets representing the S3 permissions.
    - The user logs in and authenticates against Django's users.
    - I grab from Django the list of buckets that the user is allowed to see.
    - I use boto to grab a list of links to files from those buckets and display them to the user.

    This works, but isn't ideal, and also just doesn't feel right. I've got to keep a mirror of the buckets, and I also have to maintain my own list of users/passwords and permissions, when AWS already has all of that built in. What I really want is to simply create the users in IAM and use group permissions in IAM to control access to the S3 buckets. No duplication of data or function. My app would request a UN/PW from the user and use that to connect to IAM/S3 to pull the list of buckets and files, then display links to the user. Simple. How can I, or why can't I? Am I looking at this the wrong way? What's the "right" way to address this (I assume) very common use case?
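
    For illustration only, here is a hedged sketch of the IAM-direct flow in Python with boto3 (the successor to the boto library the asker mentions). One assumption to flag loudly: IAM users call the AWS API with an access key pair, not a username/password, so the front end would collect keys rather than a UN/PW.

        # Sketch, not a confirmed solution: list what an IAM user may see
        # and hand back time-limited download links.
        import boto3
        from botocore.exceptions import ClientError

        def list_user_files(access_key, secret_key):
            session = boto3.Session(
                aws_access_key_id=access_key,
                aws_secret_access_key=secret_key,
            )
            s3 = session.client("s3")
            links = []
            # Requires s3:ListAllMyBuckets in the user's IAM policy.
            for bucket in s3.list_buckets()["Buckets"]:
                name = bucket["Name"]
                try:
                    objects = s3.list_objects_v2(Bucket=name).get("Contents", [])
                except ClientError:
                    continue  # this user's policy denies access to the bucket
                for obj in objects:
                    # Presigned URL: a short-lived download link.
                    links.append(s3.generate_presigned_url(
                        "get_object",
                        Params={"Bucket": name, "Key": obj["Key"]},
                        ExpiresIn=3600,
                    ))
            return links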

  • PHP gettext on Windows

    - by Axsuul
    There are some tutorials out there for gettext (w/ Poedit)... unfortunately, they're mostly for a UNIX environment. And even more unfortunately, I am running my WAMP server on Windows XP (but I am developing for a UNIX environment), and none of the tutorials can get gettext working properly for me. From the manual page (http://us3.php.net/manual/en/book.gettext.php), it appears that it's a different process in a Windows environment. I've tried some of the solutions in the comments, but I still can't get it to work! Please, I've spent many hours on this; hopefully someone can point me in the right direction to get this thing to work (and I'm sure there are others out there who share my frustration). So far with my setup, I'm only getting the output "Hello World!", whereas I should be getting the translated string. Here is my setup/code so far:

        <?php
        // test.php
        if (!defined('LC_MESSAGES')) {
            define('LC_MESSAGES', 6);
        }
        $locale = "deu_DEU"; // apparently the locales are different on a Windows platform
        putenv("LC_ALL=$locale");
        setlocale(LC_ALL, $locale);
        bindtextdomain("greetings", ".\locale");
        textdomain("greetings");
        echo _("Hello World");
        ?>

    Folder structure:

    - root: C:\Program Files\WampServer 2\www
    - test.php: C:\Program Files\WampServer 2\www\site
    - .po: C:\Program Files\WampServer 2\www\site\locale\deu_DEU\LC_MESSAGES\greetings.po
    - .mo: C:\Program Files\WampServer 2\www\site\locale\deu_DEU\LC_MESSAGES\greetings.mo

    Please advise! Thanks for your time :)

  • Web development scheme for staging and production servers using Git Push

    - by ServAce85
    I am using git to manage a dynamic website (PHP + MySQL), and I want to send my files from my localhost to my staging and production servers in the most efficient and hassle-free way. I am currently convinced that the best way for me to approach this problem is to use this git branching model to organize my local git repo. From there, I will use the release branches to push to my staging server for testing. Once I am happy that the release code works on the staging server, I can then merge with my master branch and push that to my production server.

    Pushing to the staging server: As noted in many introductory git posts, I could run into problems pushing into a non-bare repo, so, as suggested in this response, I plan to push the release branch to a bare repo on the server and have a post-receive hook that clones the bare repo to a non-bare repo that also acts as the web-hosted directory.

    Pushing to the production server: Here's my newest source of confusion. In the response that I cited above, it made me curious as to why @Paul states that it's a completely different story when pushing to a live, production server. I guess I don't see the problem. Would it be safe and hassle-free to follow the same steps as above, but for the master branch? Where are the potential pitfalls?

    Config files: With respect to configuration files that are unique to each environment (.htaccess, config.php, etc.), it seems simplest to .gitignore each of those files in their respective repos on their respective servers. Can you see anything immediately wrong with this? Better solutions?

    Accessing data: Finally, as I initially stated, the site uses MySQL databases to store data. How would you suggest I access that data (for testing purposes) from the staging server and localhost?

    I realize that I may have asked way too many questions for a single post, but since they're all related to the best way to set up this development scheme, I thought it was necessary.

  • Looking for a web based, embeddable file manager

    - by Kristi H.
    I'm looking for a web-based, embeddable file manager. I haven't found anything suitable through Google or the Stack Overflow archives. Does anyone know of a file manager that meets the following criteria?

    The file manager must:

    - be embeddable (e.g. Flash or a Java applet)
    - run from my server (no storing uploads in a remote location)
    - be licensed for business use (fees are okay)
    - let me disable uploads; users shouldn't be able to upload files, only download them
    - allow me to tag files and/or place them in folders
    - let me disable authentication or handle it from the backend; my app already has a login system and I don't want to make users log in twice
    - allow users to select and download multiple files
    - have an attractive interface

    The file manager should:

    - have an external config file
    - display thumbnail previews for files
    - be skinnable
    - be actively developed or at least actively maintained

    Thanks for your help. I need a sophisticated file manager and I'd like to avoid writing it from scratch.

  • How can I reorder an mbox file chronologically?

    - by Joshxtothe4
    Hello, I have a single spool mbox file that was created with Evolution, containing a selection of emails that I wish to print. My problem is that the emails are not placed into the mbox file chronologically. I would like to know the best way to order the messages from first to last using bash, Perl or Python: by date received for messages addressed to me, and by date sent for messages sent by me. Would it perhaps be easier to use maildir files or such? The emails currently exist in this format:

        From [email protected] Fri Aug 12 09:34:09 2005
        Message-ID: <[email protected]>
        Date: Fri, 12 Aug 2005 09:34:09 +0900
        From: me <[email protected]>
        User-Agent: Mozilla Thunderbird 1.0.6 (Windows/20050716)
        X-Accept-Language: en-us, en
        MIME-Version: 1.0
        To: someone <[email protected]>
        Subject: Re: (no subject)
        References: <[email protected]>
        In-Reply-To: <[email protected]>
        Content-Type: text/plain; charset=ISO-8859-1; format=flowed
        Content-Transfer-Encoding: 8bit
        Status: RO
        X-Status:
        X-Keywords:
        X-UID: 371
        X-Evolution-Source: imap://[email protected]/
        X-Evolution: 00000002-0010

        Hey the actual content of the email

        someone wrote:
        > lines of quoted text

    I am wondering if there is a way to use this information to easily reorganize the file, perhaps with Perl or such.
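
    Since the asker mentions Python, here is a minimal sketch of the reordering using only the standard library. It sorts on the Date: header alone; the received-versus-sent distinction the asker wants would need an extra branch that prefers the Received: headers for incoming mail.

        import mailbox
        from email.utils import mktime_tz, parsedate_tz

        def sort_mbox(in_path, out_path):
            """Write the messages of in_path to out_path in date order.
            out_path should not already exist (mbox.add() appends)."""
            inbox = mailbox.mbox(in_path)

            def timestamp(msg):
                parsed = parsedate_tz(msg.get("Date", ""))
                return mktime_tz(parsed) if parsed else 0.0  # undated mail sorts first

            outbox = mailbox.mbox(out_path)
            for msg in sorted(inbox, key=timestamp):
                outbox.add(msg)
            outbox.flush()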

  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem: I need to archive a lot of big files (usually 30 to 40 GB each, up to 60 TB in total) to tar files. I would like to make checksums (md5, sha1, whatever) of these files before archiving; however, not reading every file twice (once for checksumming, a second time for tar'ing) is more or less a necessity to achieve very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited). So I need some way to read a file, feeding a checksumming tool on one side and building a tar to tape on the other side, something along the lines of:

        tar cf - files | tee tarfile.tar | md5sum -

    Except that I don't want the checksum of the whole archive (which is what this sample shell code computes) but a checksum for each individual file in the archive. I've studied the GNU tar, Pax and Star options. I've looked at the source of Archive::Tar. I see no obvious way to achieve this. It looks like I'll have to hand-build something in C or similar to achieve what I need. Perl/Python/etc. simply won't cut it performance-wise, and the various tar programs miss the necessary "plugin architecture". Does anyone know of any existing solution to this before I start code-churning?
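
    The asker doubts a scripting language can sustain LTO-4 speeds, so take this purely as an illustration of the read-once idea, sketched in Python: a small wrapper hashes each block as tarfile pulls it through, producing a per-file checksum list while the archive streams to tape.

        import hashlib
        import sys
        import tarfile

        class HashingReader:
            """File-like wrapper: hashes every block as tar reads it, so
            each file is read exactly once while being archived."""
            def __init__(self, path):
                self._f = open(path, "rb")
                self.md5 = hashlib.md5()

            def read(self, size=-1):
                block = self._f.read(size)
                self.md5.update(block)
                return block

            def close(self):
                self._f.close()

        def tar_with_checksums(paths, out_stream, sums_path):
            # "w|" = uncompressed stream mode, suitable for piping to tape.
            with tarfile.open(fileobj=out_stream, mode="w|") as tar, \
                    open(sums_path, "w") as sums:
                for path in paths:  # regular files only in this sketch
                    info = tar.gettarinfo(path)
                    reader = HashingReader(path)
                    tar.addfile(info, reader)
                    reader.close()
                    sums.write("%s  %s\n" % (reader.md5.hexdigest(), path))

        if __name__ == "__main__":
            # e.g. python tarsum.py file1 file2 > /dev/st0
            tar_with_checksums(sys.argv[1:], sys.stdout.buffer, "checksums.md5")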

  • How to detect changing directory size in Perl

    - by materiamage
    Hello, I am trying to find a way of monitoring directories in Perl, in particular the size of a directory, and upon detecting a change in directory size, perform a particular action. The issue I have is with large files that require a noticeable amount of time to copy into this directory, e.g. 100 MB. What happens (in Windows, not Unix) is that the system reserves enough disk space for the entire file, even though the copy is still in progress. This causes problems for me, because my script will try to perform an action on a file that has not finished copying over. I can easily detect directory size changes in Unix via 'du', but 'du' in Windows does not behave the same way. Are there any accurate methods of detecting directory size changes in Perl?

    Edit: Some points to clarify:

    - My Perl script is only monitoring a particular directory, and upon detecting a new file or a new directory, performs an action on that new file or directory. It is not copying any files; users on the network will be copying files into the directory I am monitoring.
    - The problem occurs when a new file or directory appears (copied, not moved) that is significantly large (100 MB, but usually a couple of GB) and my program fires before this copy completes.
    - In Unix I can easily 'du' to see that the file/directory in question is growing in size, and take the appropriate action.
    - In Windows the size is static, so I cannot detect this change.
    - opendir/readdir/closedir is not feasible, as some of the directories that appear may contain thousands of files, and I want to avoid the overhead of

    Ideally I would like my program to be triggered on change, but I am not sure how to do this. As of right now it busy-waits until it detects a change. The change in file/directory size is not in my control.
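
    The question is about Perl, but as an illustration of one common workaround (shown here in Python): Windows usually keeps the destination file open without write sharing while a copy is in progress, so a probe that tries to open the file for writing fails until the copy completes. This is an assumption about the copy mechanism in use, and worth verifying:

        import os
        import time

        def copy_finished(path):
            # While Explorer/SMB is still writing the file, the copying
            # process usually holds it open without write sharing, so this
            # open fails with a permission error; once the copy completes,
            # it succeeds.
            try:
                fd = os.open(path, os.O_RDWR)
            except OSError:
                return False
            os.close(fd)
            return True

        def wait_for_copy(path, poll_seconds=2.0):
            while not copy_finished(path):
                time.sleep(poll_seconds)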

  • Giving write permissions to the IIS user on Windows 2003 Server

    - by Steve
    I am running a website on Windows 2003 Server and IIS6, and I am having problems writing and deleting files in a temporary folder, getting warnings like this:

        Warning: unlink(C:\Inetpub\wwwroot\cakephp\app\tmp\cache\persistent\myapp_cake_core_cake_):
        Permission denied in C:\Inetpub\wwwroot\cakephp\lib\Cake\Cache\Engine\FileEngine.php on line 254

    I went to the tmp directory, and in its properties I gave the IIS user the following permissions:

    - Read & Execute
    - List Folder Contents
    - Read

    It still shows the same warnings. When I am on the Properties window, if I click on Advanced, the IIS username appears twice: once with Allow type and Read & Execute permissions, and once with Deny type and Special permissions. My question is: should I give this user not only the Read & Execute permissions but also these?

    - Create Attributes
    - Create Files / Write Data
    - Create Folders / Append Data
    - Delete Subfolders and Files
    - Delete

    They are available to select if I click on the Edit button over the username. Wouldn't I be opening a security hole if I do this? Otherwise, how can I read and delete the files my website uses? Thanks.

  • Pymedia video encoding failed

    - by user1474837
    I am using Python 2.5 with Windows XP. I am trying to turn a list of pygame images into a video file using this function, which I found on the internet and edited. It worked at first, then it stopped working. This is what it printed out:

        Making video...
        Formating 114 Frames...
        starting loop
        making encoder
        Frame 1 process 1
        Frame 1 process 2
        Frame 1 process 2.5

    This is the error:

        Traceback (most recent call last):
          File "ScreenCapture.py", line 202, in <module>
            makeVideoUpdated(record_files, video_file)
          File "ScreenCapture.py", line 151, in makeVideoUpdated
            d = enc.encode(da)
        pymedia.video.vcodec.VCodecError: Failed to encode frame( error code is 0 )

    This is my code:

        def makeVideoUpdated(files, outFile, outCodec='mpeg1video', info1=0.1):
            fw = open(outFile, 'wb')
            if fw == None:
                print "Cannot open file " + outFile
                return
            if outCodec == 'mpeg1video':
                bitrate = 2700000
            else:
                bitrate = 9800000
            start = time.time()
            enc = None
            frame = 1
            print "Formating " + str(len(files)) + " Frames..."
            print "starting loop"
            for img in files:
                if enc == None:
                    print "making encoder"
                    params = {'type': 0, 'gop_size': 12, 'frame_rate_base': 125,
                              'max_b_frames': 90, 'height': img.get_height(),
                              'width': img.get_width(), 'frame_rate': 90,
                              'deinterlace': 0, 'bitrate': bitrate,
                              'id': vcodec.getCodecID(outCodec)}
                    enc = vcodec.Encoder(params)
                # Create VFrame (convert image to 24-bit RGB)
                print "Frame " + str(frame) + " process 1"
                bmpFrame = vcodec.VFrame(vcodec.formats.PIX_FMT_RGB24, img.get_size(),
                                         (pygame.image.tostring(img, "RGB"), None, None))
                print "Frame " + str(frame) + " process 2"
                # Convert to YUV, then encode
                da = bmpFrame.convert(vcodec.formats.PIX_FMT_YUV420P)
                print "Frame " + str(frame) + " process 2.5"
                d = enc.encode(da)  # THIS IS WHERE IT STOPS
                print "Frame " + str(frame) + " process 3"
                fw.write(d.data)
                print "Frame " + str(frame) + " process 4"
                frame += 1
            print "savng file"
            fw.close()

    Could somebody tell me why I get this error and possibly how to fix it? The files argument is a list of pygame images, outFile is a path, outCodec is the default, and info1 is not used anymore.

    UPDATE 1: This is the code I used to make that list of pygame images:

        from PIL import ImageGrab
        import time, pygame

        pygame.init()
        f = []  # This is the list that contains the images
        fps = 1
        for n in range(1, 100):
            info = ImageGrab.grab()
            size = info.size
            mode = info.mode
            data = info.tostring()
            info = pygame.image.fromstring(data, size, mode)
            f.append(info)
            time.sleep(fps)

  • Pull/clone an SVN repository into hg with a new default branch name?

    - by TheLQ
    I'm forking a project's SVN repo and need to integrate it into my Mercurial repo. To keep things simple I have a local hgsubversion repo and a local hg repo. However, both the Mercurial and hgsubversion repos use default as their default branch name. My goal here is to put the original code and updates on one branch and my code on the default branch. However, I have yet to be able to do this:

        W:\programming\tcsite-svn-test>hg clone http://*HG_SITE*/hg .
        no changes found
        updating to branch default
        0 files updated, 0 files merged, 0 files removed, 0 files unresolved

        W:\programming\tcsite-svn-test>hg branch blizzard
        marked working directory as branch blizzard

        W:\programming\tcsite-svn-test>hg commit

        W:\programming\tcsite-svn-test>hg log
        changeset:   0:be13a9580df0
        branch:      blizzard
        tag:         tip
        user:        Leon Blakey <[email protected]>
        date:        Fri Jan 14 23:44:25 2011 -0500
        summary:     Created Blizzard Branch

        W:\programming\tcsite-svn-test>hg pull http://*SVN_SITE*/svn/
        pulling from http://*SVN_SITE*/svn/
        ....
        pulled 23 revisions
        (run 'hg update' to get a working copy)

        W:\programming\tcsite-svn-test>hg branch
        blizzard

        W:\programming\tcsite-svn-test>hg branches
        default                       23:93642a8890ab   <------
        blizzard                       0:be13a9580df0

    Not surprisingly, hgsubversion puts pulled commits into the default branch, when I really need them on the blizzard branch. From the docs, there is no way to rename the branch that a commit came from. Frustratingly, I can't even come up with a way to do it on a repo where only the hgsubversion repo is being pulled from, nothing else; all commits are tied to that one branch no matter what. Are there any suggestions on how to pull changes from an SVN repo and rename the branch to something else?

  • SVN 409 conflict on commits and updates

    - by bhefny
    We have been using SVN for the past year, and when we migrated to an online server we started getting this error:

        Commit: Commit failed (details follow):
        File or directory 'x.php' is out of date; try updating
        resource out of date; try updating
        CHECKOUT of '/!svn/ver/491/x.php': 409 Conflict (http://svn.example.com)

    We are currently using SmartSVN 6.5, and we have also tested with RapidSVN and Syncro (we can't use Tortoise as we have a lot of Ubuntu users). At the beginning I thought "How do you fix an SVN 409 Conflict Error" would help, but it didn't; we are still facing the same error, and it's even more absurd now. The main problem is that after you get the error, you can't shake it off. Updating doesn't solve it; reverting doesn't solve it. You are just stuck with the error. The only thing that works is removing the file from SVN and adding your version back, but that would defeat the purpose of using SVN in the first place.

    This is our Apache config (and yes, autoversioning is on):

        <Location />
            DAV svn
            SVNPath /home/example/svn
            SVNAutoversioning on
            AuthType Basic
            AuthName "Access Restricted"
            AuthUserFile /home/example/svn-auth-file
            Require valid-user
        </Location>

        <Directory />
            <Files ~ "^\.ht">
                Order allow,deny
                Allow from all
                Satisfy All
            </Files>
            <Files ~ "^error_log">
                Order allow,deny
                Allow from all
                Satisfy All
            </Files>
        </Directory>

    And here are some observations:

    - We don't receive conflicts anymore; we just get this 409 conflict.
    - You can somehow avoid the error if you always update before committing.
    - When committing a modified file plus a newly added file, you get the error. It's as if the added file incremented the version by one, and then you are committing another file with an older version.

    Please advise; we are about to go insane.

  • Archiver Securing SQLite Data without using Encryption on iPhone

    - by Redrocks
    I'm developing an iPhone app that uses Core Data with a SQLite data store and lots of images in the resource bundle. I want a "simple" way to obfuscate the file structure of the SQLite database and the image files, to prevent the casual hacker or unscrupulous developer from gaining access to them. When the app is deployed, the database file and image files would be obfuscated. Upon launching, the app would read in and un-obfuscate the database file, write the un-obfuscated version to the user's "tmp" directory for use by Core Data, and read/un-obfuscate image files as needed. I'd like to apply a simple algorithm to the files that would somehow scramble/manipulate the file data so that the SQLite database data isn't discernible when the db is opened in a text editor, and so that neither is recognized by other applications (SQLite Manager, Photoshop, etc.). It seems, from the information I've read, that I could use NSFileManager, NSKeyedArchiver and NSData to accomplish this, but I'm not sure how to proceed. I've been developing software for many years, but I'm new to everything CocoaTouch, Mac and iPhone. I've also never had to secure/encrypt my data, so this is new. Any thoughts, suggestions, or links to solutions are appreciated.
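
    Not the Cocoa-specific answer the asker needs, but to make the "simple scrambling algorithm" idea concrete, a tiny language-neutral sketch (shown in Python; the key is a made-up placeholder). XOR with a repeating key is reversible and defeats casual file inspection, though it is in no way real encryption:

        KEY = b"not-a-secret"  # hypothetical key; obfuscation only

        def scramble(data):
            # XOR every byte with a repeating key: reversible, hides the
            # SQLite/image signatures from casual inspection, NOT secure.
            return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(data))

        def unscramble(data):
            return scramble(data)  # XOR is its own inverse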

  • How to get the OCI lib on a Red Hat machine to work with ROracle?

    - by Matt Bannert
    I need to get the OCI lib working on my RHEL 6.3 machine, and I am having trouble with OCI header files that can't be found. I have installed (using yum install) oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64.rpm because, according to the official page, it's all I need to run OCI. To test the whole thing in general I've installed sqlplus64, which worked after I set export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib. Unfortunately, the header files still couldn't be found after setting LD_LIBRARY_PATH. Actually, I am not surprised, because there is no include directory in any of these Oracle paths. So the question is: where do I get these missing header files from? Are they actually already there and I just can't find them?

    By the way, I am doing this whole exercise because I want to use ROracle on my RStudio server, and this R package depends on the OCI library. Once I am back in R territory the road gets much less bumpy for me.

    EDIT: this documentation helped me a little further. However, I guess I found some header files now, in "/usr/include/oracle/11.2/client64". But which variable do I have to set to this location?

  • ASP FileSystemObject

    - by sushant
    I am using this code to access files and folders:

        <%@ Language=VBScript %>
        <%
        option explicit
        dim sRoot, sDir, sParent, objFSO, objFolder, objFile, objSubFolder, sSize
        %>
        <%
        sRoot = "D:\Raghu"
        sDir = Request("Dir")
        sDir = sDir & "\"
        Response.Write "" & sDir & "" & vbCRLF
        Set objFSO = CreateObject("Scripting.FileSystemObject")
        on error resume next
        Set objFolder = objFSO.GetFolder(sRoot & sDir)
        if err.number <> 0 then
            Response.Write "Could not open folder"
            Response.End
        end if
        on error goto 0
        sParent = objFSO.GetParentFolderName(objFolder.Path)
        ' Remove the contents of sRoot from the front. This gives us the parent
        ' path relative to the root folder, e.g. if the parent folder is
        ' "c:\webfiles\subfolder1\subfolder2" then we just want "subfolder1\subfolder2"
        sParent = mid(sParent, len(sRoot) + 1)
        Response.Write ""
        ' Give a link to the parent folder. This is just a link to this page,
        ' only passing in the new folder as a parameter
        Response.Write "Parent folder" & vbCRLF
        ' Now we want to loop through the subfolders in this folder
        For Each objSubFolder In objFolder.SubFolders
            ' And provide a link to them
            Response.Write "" & objSubFolder.Name & "" & vbCRLF
        Next
        ' Now we want to loop through the files in this folder
        For Each objFile In objFolder.Files
            if Clng(objFile.Size) < 1024 then
                sSize = objFile.Size & " bytes"
            else
                sSize = Clng(objFile.Size / 1024) & " KB"
            end if
            ' And provide a link to view them. This is a link to show.asp,
            ' passing in the directory and the file as parameters
            Response.Write "" & objFile.Name & "" & sSize & "" & objFile.Type & "" & vbCRLF
        Next
        Response.Write ""
        %>

    It works fine, but when I try to access something on a shared path like "\cvrdd0110:share" it gives an error. How do I access these files?

  • Ant: use include and exclude together

    - by Ken
    OK, this seems like it should be really simple. I'm using Apache Ant 1.8, and I have a target which does:

        <delete file="output/program.tar.bz2"/>
        <tar basedir="input" destfile="output/program.tar.bz2" compression="bzip2">
            <tarfileset dir="input">
                <include name="goodfolder1/**"/>
                <include name="goodfolder2/**"/>
                <exclude name="**/badfile"/>
                <exclude name="**/*.badext"/>
            </tarfileset>
        </tar>

    I want it to make a .tar.bz2 of input/goodfolder1 and input/goodfolder2, excluding files named "badfile" and files with the extension ".badext". It's giving me a .tar.bz2, but it's including badfile and *.badext; the excludes seem to be ignored. The order of include/exclude doesn't seem to make a difference. I tried wrapping the includes/excludes in a <patternset> (the docs say it's implicit?), but it made no difference. I'm sure there's something simple I'm missing, since the manual has a very similar example, though in a somewhat different context.

    EDIT: It looks like it could be related to the dir="input" attribute: it's adding everything in "input", and then adding everything in the tarfileset to that. Files I want appear twice in program.tar.bz2, but files that are excluded only appear once. But dir is mandatory, and I don't see how this is different from the examples in the manual.

  • How to download a file from a UNC mapped share via IIS and ASP

    - by helgeg
    I am writing an ASP application that will serve files to clients through the browser. The files are located on a file server that is available from the machine IIS is running on via a UNC path (\\server\some\path). I want to use something like the code below to serve the file. Serving files that are local to the machine IIS is running on works well with this method; my trouble is serving files from the UNC-mapped share:

        //Set the appropriate ContentType.
        Response.ContentType = "Application/pdf";
        //Get the physical path to the file.
        string FilePath = MapPath("acrobat.pdf");
        //Write the file directly to the HTTP content output stream.
        Response.WriteFile(FilePath);
        Response.End();

    My question is how I can specify a UNC path for the file name. Also, to access the file share I need to connect with a specific username/password. I would appreciate some pointers on how I can achieve this (either with the approach above or by other means).

  • Where should an application's default folder live?

    - by HotOil
    Hi, I'm creating a little app that configures a connected device and then saves the config information in a file. The filename cannot be chosen by the user, but its location can be. Where is the best place for the app's default save-to folder?

    I have seen examples out there where it is the "MyDocuments" location (e.g. Visual Studio does this). I have seen a folder created right at the top of the C:\ drive; I find that to be a little obnoxious, personally. It could be in Program Files\[Manufacturer] or Program Files\[Product Name], or wherever the app was installed. I have used this location in the past; I dislike it because Windows Explorer does not allow a user to browse there very easily ('browsability'). Going with this last notion that 'browsability' is a factor, I suppose MyDocuments is the best choice. Is this the most common, most widely accepted practice?

    I think historically we have chosen the install folder because that co-locates the data with the device management utilities, but I would really like to get away from that. I don't want the user to have to go pawing through system files to find his/her data, especially if that person is not too Windows-savvy. Also, I am using the .NET WinForms FolderBrowserDialog, and the "Environment.SpecialFolder" enum isn't helpful in setting up the dialog to point into the Program Files folder.

    Thanks for your input! Suz.

  • NoSQL for filesystem storage organization and replication?

    - by wheaties
    We've been discussing the design of a data warehouse strategy within our group, to meet testing, reproducibility, and data-syncing requirements. One of the suggested ideas is to adopt a NoSQL approach using an existing tool rather than re-implement a whole lot of the same on a file system. I don't know if a NoSQL approach is even the best approach to what we're trying to accomplish, but perhaps if I describe what we need/want, you all can help.

    - Most of our files are large, 50+ GB in size, held in a proprietary, third-party format. We need to be able to access each file by a name/date/source/time/artifact combination. Essentially a key-value-pair style lookup.
    - When we query for a file, we don't want to have to load all of it into memory. The files are really too large and would swamp our server. We want to be able to somehow get a reference to the file and then use a proprietary, third-party API to ingest portions of it.
    - We want to easily add, remove, and export files from storage.
    - We'd like to set up automatic file replication between two servers (we can write a script for this). That is, sync the contents of one server with another. We don't need a distributed system where it only appears as if we have one server; we'd like complete replication.
    - We also have other smaller files that have a tree-type relationship with the big files. One file's content will point to the next, and so on. It's not a "spoked wheel", it's a full-blown tree.
    - We'd prefer a Python, C or C++ API to work with a system like this, but most of us are experienced with a variety of languages. We don't mind as long as it works, gets the job done, and saves us time.

    What do you think? Is there something out there like this?
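
    To make the lookup requirement concrete (an assumption about the design, not a product recommendation), one minimal sketch in Python: a thin index maps the five-part key to a path on disk, so a query returns a reference to the large file rather than loading it, and the proprietary API can then ingest portions of it.

        import sqlite3

        def open_index(db_path):
            # Lightweight key-to-path index; the 50+ GB files stay on disk.
            db = sqlite3.connect(db_path)
            db.execute("""CREATE TABLE IF NOT EXISTS artifacts (
                              name TEXT, date TEXT, source TEXT, time TEXT,
                              artifact TEXT, path TEXT,
                              PRIMARY KEY (name, date, source, time, artifact))""")
            return db

        def locate(db, name, date, source, time, artifact):
            row = db.execute(
                "SELECT path FROM artifacts WHERE name=? AND date=? AND source=? "
                "AND time=? AND artifact=?",
                (name, date, source, time, artifact)).fetchone()
            return row[0] if row else None  # a reference, never the file itself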

  • Writing an audio player in C#

    - by Malki
    Hi, I have a pretty cool idea for a very special media player. I like to think about this project as a mini-startup, since I don't yet know if my idea is practical. Anyway, before implementing my idea, I first need to be able to implement a simple audio player. My preferred language for this project is C#, simply because it's so easy to use, but any other object-oriented language would be fine too, I guess. I started out with no knowledge whatsoever about audio. My main goals right now are:

    - Being able to play audio files, in as many formats as possible (sort of a VLC-type player, but only audio for now).
    - Being able to analyze audio files: reading frequency, amplitude, volume, and other information about the audio. I think maybe a good idea here is to be able to analyze one file format (PCM?), and temporarily convert any file I want to analyze to that format. This is in order to later implement a mechanism that compares songs and identifies similar songs to recommend to the user (this feature isn't part of my idea, but I figured since it exists in many players nowadays, I need to have it too if I want to be able to compete with them). BTW, I currently don't have any knowledge about audio/wavelengths/frequencies and such, so I'd appreciate it if someone could point me in the right direction about this analysis feature; see the sketch after this list.

    Maybe in the future I'd expand to playing video files as well, but for now I'm concentrating on audio. After searching the Internet for a while, I've come across LAME. The problem is, it's not C#, and I'm not sure how to use it. I know there is something called "interoperability" that is supposed to let me work with native DLL files through C#. Any information about that would be helpful as well. Any help would be much appreciated. Thanks, Malki :)
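
    The asker's target language is C#, but to illustrate the "convert everything to PCM, then analyze" step, here is a hedged Python sketch that computes the RMS amplitude of a 16-bit mono PCM WAV file, the simplest of the measurements listed above:

        import math
        import struct
        import wave

        def rms_amplitude(path):
            # Once every input is converted to PCM, per-sample analysis
            # like this becomes format-independent.
            with wave.open(path, "rb") as w:
                assert w.getnchannels() == 1 and w.getsampwidth() == 2, \
                    "sketch assumes 16-bit mono PCM"
                frames = w.readframes(w.getnframes())
            samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
            return math.sqrt(sum(s * s for s in samples) / len(samples))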

  • How to determine which file in the previous version a text block of a file comes from?

    - by Muhammad Asaduzzaman
    The problem is described below. Suppose I have a list of files in one version (say A, B, C, D). In the next version I have the following files (A, E, F, G). There are some similarities in their contents: the files in the later version come from the previous version by file renaming, content addition, deletion or partial modification, or without any change (for example, A is not changed). I take a block of text from a file (E, in the 2nd version) and check which files in the 1st version contain this text block. I find that B, C and D contain the text fragment. I want to determine from which file (B, C or D) this text block actually comes. (I assume that E is a file whose name changed in the second version.) Since the contents may be changed, added to or deleted in the later version, I use the LCS algorithm to measure similarity. But I cannot map the file to its previous version. I think one possible approach might be to use the location information of the matched text blocks, but this heuristic does not always work. Does any research or algorithm exist for this? Any direction will be helpful. Thanks in advance.
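
    Not a known published algorithm, just a small Python illustration of one scoring idea: measure how much of the block each candidate file covers (difflib's matching blocks approximate an LCS) and pick the candidate with the best coverage.

        import difflib

        def best_origin(block, candidates):
            """candidates maps filename -> file contents from the old version."""
            def coverage(name):
                m = difflib.SequenceMatcher(None, block, candidates[name])
                matched = sum(b.size for b in m.get_matching_blocks())
                return matched / float(len(block))  # fraction of block matched
            # Ties would need extra signals, e.g. the block's position in the file.
            return max(candidates, key=coverage)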

  • S3 file uploading from a Mac app through PHP?

    - by Ilija Tovilo
    I have asked this question before, but it was deleted due to too little information. I'll try to be more concrete this time. I have an Objective-C Mac application which should allow users to upload files to S3 storage. The S3 storage is mine; the users don't have an Amazon account. Until now, the files were uploaded directly to the Amazon servers. After thinking some more about it, that wasn't really a great concept regarding security and flexibility. I want to add a server in between: the user should authenticate with my server, the server would open a session if the authentication was successful, and the file sharing could begin.

    Now my question. I want to upload the files to S3. One option would be to make a POST request and wait until the server receives the file. The problems here are that there would be a delay while the file is uploaded from my server to the S3 servers, and that it would double the upload time. Best would be if I could validate the request and then redirect it, so the client uploads directly to S3 storage. I'm not sure if this is possible somehow. Uploading directly to S3 doesn't seem to be very smart, and after looking into other apps like Droplr and Dropmark, it looks like they don't do this. (By the way, I checked this using Little Snitch: they have their API on their own web server, and that's it.) Could someone clear things up for me?

    EDIT: How should I transmit my files to S3? Is there a way to "forward" them, or do I have to upload them to my server and then upload them from there to S3? Like I said, other apps can do this efficiently and without the need to communicate with S3 directly.
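
    One established pattern for the "validate, then redirect" idea is a presigned URL: after authenticating the user, the server signs a short-lived PUT URL and the client uploads straight to S3, with nothing proxied through the server. The asker's middle tier is PHP, but the idea is language-neutral; here is a hedged sketch with boto3, where the bucket, key, and expiry are placeholders:

        import boto3

        def presigned_upload_url(bucket, key, expires=900):
            # The server's own AWS credentials come from its environment
            # or instance role; the client never sees them.
            s3 = boto3.client("s3")
            return s3.generate_presigned_url(
                "put_object",
                Params={"Bucket": bucket, "Key": key},
                ExpiresIn=expires,  # seconds the signed URL stays valid
            )

        # The authenticated client then issues: PUT <url> with the file body.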

  • jQuery - Creating a dynamic content loader using $.get()

    - by Kenny Bones
    Hello everybody! (hello dr. Nick) :) So I posted a question yesterday about a content loader plugin for jQuery that I thought I'd use, but didn't get to work: http://stackoverflow.com/questions/2469291/jquery-could-use-a-little-help-with-a-content-loader. Although it works now, I see some disadvantages to it: it requires heaploads of files where the content is stored, since the code essentially picks up the URL in the link's href and searches that file for a div called #content.

    What I would really like to do is to collect all of these files into a single file, give each div/container its own unique ID, and just pick up the content from those, so I won't need so many separate files lying around. Nick Craver thought I should use $.get() instead, since it's got a decent callback. But I'm not that strong in JS at all, and I don't even know what this means. I'm basically used to Visual Basic and passing arguments, storing in txt files etc., which is really not suitable for this purpose.

    So what's the "normal" way of doing things like this? I'm pretty sure I'm not the only one who's thought of this, right? I basically want to get content from a single PHP file that contains a lot of divs with unique IDs, and, without much hassle, fade out the existing content in my main page, pick up the content from the other file and fade it into my main page.

  • Does XAML work with file links in Visual Studio?

    - by Tim
    I'm adding a new WPF project to an existing Visual Studio solution and would like to reuse a bunch of code (C# and XAML) from an existing project within the solution. I've created the new project and added existing files as follows:

    1. Right-click the project
    2. Add - Add Existing Item
    3. Find the file to reuse, then use the arrow next to "Add" and choose "Add as Link"

    I now have a nice project set up with all the proper links. However, XAML chokes on these links. For example:

        <ResourceDictionary.MergedDictionaries>
            <ResourceDictionary Source="Resources\Elements\Buttons\Buttons.xaml" />
            <ResourceDictionary Source="Resources\Elements\TextBox\TextBox.xaml" />
        </ResourceDictionary.MergedDictionaries>

    The files "Buttons.xaml" and "TextBox.xaml" exist as links in my new project. The project builds, but when I run, I get the following XamlParseException:

        'Resources\Elements\Buttons\Buttons.xaml' value cannot be assigned to property 'Source'
        of object 'System.Windows.ResourceDictionary'. Cannot locate resource
        'resources/elements/buttons/buttons.xaml'.

    It seems like the XAML parser requires an actual copy of these XAML files to exist in my new project, instead of links. This is exactly what I'm trying to avoid: I want my project to share these files so that any changes get transferred to the other project without hunting and copying. Any insight is appreciated!

  • iPhone: Low memory crash...

    - by MacTouch
    Once again I'm hunting memory leaks and other crazy mistakes in my code. :) I have a cache of frequently used files (images, data records etc.) with a TTL of about one week and a size limit (100 MB). There are sometimes more than 15000 files in a directory. On application exit, the cache writes a control file with the current cache size, along with other useful information. If the application crashes for some reason (sh.. happens sometimes), I have in such a case to calculate the size of all files on application start, to make sure I know the cache size.

    My app crashes at this point because of low memory, and I have no clue why. The memory leak detector does not show any leaks at all, and I do not see any either. What's wrong with the code below? Is there any other fast way to calculate the total size of all files within a directory on the iPhone? Maybe without enumerating the whole contents of the directory? The code is executed on the main thread.

        NSUInteger result = 0;
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        NSDirectoryEnumerator *dirEnum = [[[NSFileManager defaultManager] enumeratorAtPath:path] retain];
        int i = 0;
        while ([dirEnum nextObject]) {
            NSDictionary *attributes = [dirEnum fileAttributes];
            NSNumber *fileSize = [attributes objectForKey:NSFileSize];
            result += [fileSize unsignedIntValue];
            if (++i % 500 == 0) { // I tried lower values too
                [pool drain];
            }
        }
        [dirEnum release];
        dirEnum = nil;
        [pool release];
        pool = nil;

    Thanks, MacTouch

  • Browser download file prompt using JavaScript

    - by aix
    Hi, I was wondering if there is any method to trigger the browser's download-file prompt using JavaScript. My reason: users will be uploading files to a local file server which cannot be accessed from the web server. In other words, the two will be on different domains! For example, the website is hosted on www.xyz.com, but the files reside on a local file server with an address like \\10.10.10.01\Files\file.txt. How am I uploading/transferring files to the local file server? Using ActiveX (yikes) and VBScript!!!! (Don't ask :-)

    So I am storing the local file path in my database and binding that data to a grid. When the user clicks on a link, the file opens in a window (using JavaScript). The problem is that certain file types like text, jpg, pdf, etc. open inside the browser window. How would I be able to implement content-type or content-disposition using client-side scripting? Is that even possible? Hoping my description was clear. Any ideas?

    EDIT: the local file server has a Windows shared folder in which the files are saved.
