Search Results

Search found 72103 results on 2885 pages for 'file storage'.


  • IIS web server creates new file: ownership?

    - by Beaud.
    IIS is configured for Integrated Windows Authentication, and web.config is configured as follows:

        <authentication mode="Windows" />
        <identity impersonate="true" />

    We are load balancing between \\webserver1 and \\webserver2, both Windows Server 2003. \\webserverX creates an XML file on \\share1 and access is denied. We got past the access denial by allowing Everyone to access the share... We would like the impersonated user to be the owner of the created file. Instead, \\webserver1's computer account is the owner. How can we make sure that the impersonated user has ownership of the file at creation time?

    PROGRESSION: I decided to create the file locally in \\webserver1's root directory. The file's owner is NETWORK SERVICE even with impersonate="true", and I'm unable to change ownership of the file in C# code. Why won't IIS use the impersonated user's permissions when creating a file? If it actually does, what am I doing wrong?
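
    A minimal sketch of forcing the write to happen under the caller's token (the share path and XML payload are placeholders; this assumes the request is authenticated, so User.Identity is a WindowsIdentity):

        using System.IO;
        using System.Security.Principal;
        using System.Web;

        // Perform the file creation explicitly under the impersonated token, so the
        // new file is owned by the authenticated user instead of the process account.
        WindowsIdentity caller = (WindowsIdentity)HttpContext.Current.User.Identity;
        using (WindowsImpersonationContext ctx = caller.Impersonate())
        {
            File.WriteAllText(@"\\share1\output.xml", "<data />");
        } // the token is reverted when the context is disposed

    Ownership at creation time is taken from the token in effect at the CreateFile call, so if a module or handler reverts impersonation before this code runs, the process account wins.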


  • Dir all files to output.log, including message "File Not Found"

    - by user316687
    I'm trying to dir a bunch of files with the DOS dir command. My dir.bat file:

        dir E:\documentos\57\Asiento\01\"Asiento 3 Modificacion de Estatuto.doc"
        dir E:\documentos\134\Asiento\01\"File Does Not Exist.doc"
        dir E:\documentos\55\Asiento\01\"Asiento 5 Padron de Afiliados Segunda Entrega.doc"

    The second one doesn't exist. When running my bat:

        C:\myuser>E:\dir.bat > output.log

    I open output.log and don't find any message about the file that was not found. output.log:

        E:\documentos>dir E:\documentos\57\Asiento\01\"Asiento 3 Modificacion de Estatuto.doc"
         Volume in drive E is New Volume
         Volume Serial Number is 0027-F7F6

         Directory of E:\documentos\57\Asiento\01

        20/12/2005  06:41 p.m.            40,960 Asiento 3 Modificacion de Estatuto.doc
                       1 File(s)          40,960 bytes
                       0 Dir(s)  17,053,155,328 bytes free

        E:\documentos>dir E:\documentos\134\Asiento\01\"File Does Not Exist.doc"
         Volume in drive E is New Volume
         Volume Serial Number is 0027-F7F6

         Directory of E:\documentos\134\Asiento\01

        E:\documentos>dir E:\documentos\55\Asiento\01\"Asiento 5 Padron de Afiliados Segunda Entrega.doc"
         Volume in drive E is New Volume
         Volume Serial Number is 0027-F7F6

         Directory of E:\documentos\55\Asiento\01

        08/08/2007  08:33 a.m.            40,960 Asiento 5 Padron de Afiliados Segunda Entrega.doc
                       1 File(s)          40,960 bytes
                       0 Dir(s)  17,053,151,232 bytes free

    Is there any way that output.log shows the "File Not Found" message?
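
    A note on what is likely happening: dir writes its "File Not Found" message to stderr, while > only captures stdout. A minimal sketch that captures both streams into the log (cmd.exe syntax):

        REM 2>&1 folds stderr into stdout, so "File Not Found" lands in the log too
        E:\dir.bat > output.log 2>&1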


  • Java FileLock for Reading and Writing

    - by bobtheowl2
    I have a process that will be called rather frequently from cron to read a file that has certain move-related commands in it. My process needs to read and write this data file, and keep it locked to prevent other processes from touching it during that time. A completely separate process can be executed by a user to (potentially) write/append to this same data file. I want these two processes to play nice and only access the file one at a time. The nio FileLock seemed to be what I needed (short of writing my own semaphore-type files), but I'm having trouble locking it for reading. I can lock and write just fine, but when attempting to create the lock when reading I get a NonWritableChannelException. Is it even possible to lock a file for reading? It seems like a RandomAccessFile is closer to what I need, but I don't see how to implement that. Here is the code that fails:

        FileInputStream fin = new FileInputStream(f);
        FileLock fl = fin.getChannel().tryLock();
        if (fl != null) {
            System.out.println("Locked File");
            BufferedReader in = new BufferedReader(new InputStreamReader(fin));
            System.out.println(in.readLine());
            ...

    The exception is thrown on the FileLock line:

        java.nio.channels.NonWritableChannelException
            at sun.nio.ch.FileChannelImpl.tryLock(Unknown Source)
            at java.nio.channels.FileChannel.tryLock(Unknown Source)
            at Mover.run(Mover.java:74)
            at java.lang.Thread.run(Unknown Source)

    Looking at the JavaDocs, it says: "Unchecked exception thrown when an attempt is made to write to a channel that was not originally opened for writing." But I don't necessarily need to write to it. When I try creating a FileOutputStream, etc. for writing purposes, it is happy until I try to open a FileInputStream on the same file.
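
    A minimal sketch of the RandomAccessFile route hinted at above (names are illustrative): opening the file in "rw" mode yields a writable channel, so an exclusive tryLock() is permitted, and the same channel can still be read:

        import java.io.*;
        import java.nio.channels.*;

        public class LockedReader {
            public static void main(String[] args) throws IOException {
                File f = new File(args[0]);
                // "rw" gives a writable channel, which exclusive locks require;
                // the file's contents are untouched unless you write to it.
                try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
                    FileLock fl = raf.getChannel().tryLock(); // null if another process holds it
                    if (fl != null) {
                        System.out.println("Locked File");
                        BufferedReader in = new BufferedReader(
                                new InputStreamReader(Channels.newInputStream(raf.getChannel())));
                        System.out.println(in.readLine());
                        fl.release();
                    }
                }
            }
        }

    Alternatively, a shared lock is possible on a read-only channel via tryLock(0L, Long.MAX_VALUE, true); only exclusive locks demand a writable channel.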


  • Visual Studio opening .xml files in Notepad

    - by Portman
    So I'm happily working on a project making heavy use of custom .xml configuration files this morning. All of a sudden, whenever I double-click an .xml file in Solution Explorer, it opens in Notepad instead of within Visual Studio. Thinking that it was the Windows file associations, I right-clicked on a file in Explorer, selected Open With > Choose Defaults, and selected Visual Studio 2008. But the problem remains: now when I open a file from Explorer, Visual Studio opens, and then it opens Notepad. Needless to say, this is very frustrating, and Google is not much help. Has anyone else ever had this problem, and what did you do about it? Notes:

    - This only happens for .xml files. Other text files (.config, .txt) open within Visual Studio just fine.
    - This has nothing to do with Windows file associations, as Windows opens VS2008 just as it should. This is some crazy problem internal to Visual Studio.
    - I've also tried Tools > Options > General > Restore File Associations. No luck.
    - Nothing is present in Tools > Options > Text Editor > File Extension.
    - This is what my "Open With" menu looks like for .xml files; as you can see, "XML Editor" is set to the default.


  • Python: slow read & write for millions of small files

    - by Jami
    I am building a directory tree which has tons of subdirectories and files. The total directory count is somewhere along 256^32 subdirectories with 256 files in each end, which are only a few bytes long. I did this so I would have fast access to these files (since I'm not searching, I'm just directly accessing them via a known file path). I have a Python script that builds this filesystem and reads & writes those files. The problem is that when I reach more than 1 GB of total file size, the read and write methods become extremely slow. Here's the function I have that reads the contents of a file (the file contains an integer string), adds a certain number to it, then writes it back to the original file:

        def addInFile(path, scoreToAdd):
            num = scoreToAdd
            try:
                shutil.copyfile(path, '/tmp/tmp.txt')
                fp = open('/tmp/tmp.txt', 'r')
                num += int(fp.readlines()[0])
                fp.close()
            except:
                pass
            fp = open('/tmp/tmp.txt', 'w')
            fp.write(str(num))
            fp.close()
            shutil.copyfile('/tmp/tmp.txt', path)

    I previously tried performing Linux console commands, but that was slower. I copy the file to a temporary file first, then access/modify it, then copy it back, because I found this was faster than directly accessing the file. I think the cause of the slowdown is that there are tons of files. Performing this function 1000 times sometimes takes a minute now, whereas before (when there were only a few files) 1000 calls completed in less than a second. How do you suggest I fix this?
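
    A minimal sketch of the same update done in place (an assumption that the two copyfile round trips per call are a large share of the cost; error handling is deliberately narrow):

        def add_in_file(path, score_to_add):
            """Read the integer stored in `path`, add score_to_add, write it back."""
            num = score_to_add
            try:
                with open(path, 'r') as fp:
                    num += int(fp.read())
            except (IOError, ValueError):
                pass  # missing or empty file: treat the stored score as zero
            with open(path, 'w') as fp:
                fp.write(str(num))

    The deeper fix is probably structural: millions of tiny files mean a directory lookup, inode fetch and page read per counter, so batching counters into fewer, larger files (or a dbm/SQLite store) avoids the per-file overhead entirely.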


  • Upload picture and send path to database

    - by opa.mido
    I am trying to upload a picture and link it to my database, using the code below (Upload2.php):

        <?php
        // Check if a file has been uploaded
        if(isset($_FILES['uploaded_file'])) {
            // Make sure the file was sent without errors
            if($_FILES['uploaded_file']['error'] == 0) {
                // Connect to the database
                $dbLink = new mysqli('localhost', 'root', '1234', 'fyp');
                if(mysqli_connect_errno()) {
                    die("MySQL connection failed: ". mysqli_connect_error());
                }

                // Gather all required data
                $name = $dbLink->real_escape_string($_FILES['uploaded_file']['name']);
                $mime = $dbLink->real_escape_string($_FILES['uploaded_file']['type']);
                $data = $dbLink->real_escape_string(file_get_contents($_FILES['uploaded_file']['tmp_name']));
                $size = intval($_FILES['uploaded_file']['size']);

                // Create the SQL query (the column list must match the five values:
                // name, mime, size, data, created)
                $query = "
                    INSERT INTO `pic` (`Name`, `mime`, `size`, `data`, `created`)
                    VALUES ('{$name}', '{$mime}', {$size}, '{$data}', NOW())";

                // Execute the query
                $result = $dbLink->query($query);

                // Check if it was successful
                if($result) {
                    echo 'Success! Your file was successfully added!';
                }
                else {
                    echo 'Error! Failed to insert the file' . "<pre>{$dbLink->error}</pre>";
                }
            }
            else {
                echo 'An error occurred while the file was being uploaded. ' .
                     'Error code: '. intval($_FILES['uploaded_file']['error']);
            }

            // Close the mysql connection
            $dbLink->close();
        }
        else {
            echo 'Error! A file was not sent!';
        }

        // Echo a link back to the main page
        echo '<p>Click <a href="index.html">here</a> to go back</p>';
        ?>

    I reached the last stage and it gives the error "A file was not sent". I don't know what I have missed. Thank you.
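
    On the reported error: $_FILES['uploaded_file'] is only populated when the form posts multipart data under that exact field name. A minimal form sketch that would satisfy the script (the markup is an assumption, matched to the script's field name):

        <form action="Upload2.php" method="post" enctype="multipart/form-data">
            <input type="file" name="uploaded_file" />
            <input type="submit" value="Upload" />
        </form>

    A missing enctype="multipart/form-data" is the most common way to end up on the "A file was not sent!" branch.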


  • Access/Download server files, not in site root, with PHP

    - by user271619
    Usually I save documents (images, mpegs, excel, word docs, etc.) for my friends or family in my website's root, inside a directory called /files/ or something similar. Nothing too uncommon. But I have been playing with user session control, and allowing users to upload files to the dedicated /files/ directory (the file names are saved in a db, with that user's ID). That means other people could try to guess and locate other people's files. I do randomize the file names upon upload, and I stop Apache from displaying the /files/ directory content. However, I'd like to start saving the files outside of the website's root, so they can't be accessed directly via the browser. I don't have any code to show, but I didn't want to even start on this endeavor if it can't be accomplished. I did find this snippet that shows how to display an image from outside your website root:

        $file = $_GET['file'];
        $fileDir = '/path/to/files/';

        if (file_exists($fileDir . $file)) {
            // Note: You should probably do some more checks
            // on the filetype, size, etc.
            $contents = file_get_contents($fileDir . $file);

            // Note: You should probably implement some kind
            // of check on filetype
            header('Content-type: image/jpeg');
            echo $contents;
        }

    Maybe I can use this for any file type, but has anyone heard of a better way to allow (logged-in) users to access their files online, without giving other users similar access?
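
    A minimal sketch of one way to do that ownership check (table and column names are assumptions; $pdo is an already-connected PDO handle). Looking the file up by id keeps client-supplied filenames out of the filesystem path entirely:

        <?php
        session_start();
        $fileDir = '/path/outside/webroot/files/';

        // Only return the row if the file belongs to the logged-in user
        $stmt = $pdo->prepare('SELECT stored_name, mime_type FROM user_files WHERE id = ? AND user_id = ?');
        $stmt->execute(array($_GET['id'], $_SESSION['user_id']));
        $row = $stmt->fetch();

        if ($row && file_exists($fileDir . $row['stored_name'])) {
            header('Content-Type: ' . $row['mime_type']);
            header('Content-Length: ' . filesize($fileDir . $row['stored_name']));
            readfile($fileDir . $row['stored_name']); // streams without loading the whole file
        } else {
            header('HTTP/1.0 404 Not Found');
        }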


  • Random-access archive for Unix use

    - by tylerl
    I'm looking for a good format for archiving entire filesystems of old Linux computers.

    TAR.GZ: The tar.gz format is great for archiving files with UNIX-style attributes, but since the compression is applied across the entire archive, the design precludes random access. If you want to access a file at the end of the archive, you have to start at the beginning and decompress the whole archive (which could be several hundred GB) up to the point where you find the entry you're looking for.

    ZIP: Conversely, one selling point of the ZIP format is that it stores an index of the archive: filenames are stored separately, with pointers to the location within the archive where the data can be found. If I want to extract a file at the end, I look up the position of that file by name, seek to the location, and extract the data. However, it doesn't store file attributes such as ownership, permissions, symbolic links, etc.

    Other options? I've tried using squashfs, but it's not really designed for this purpose. The file format is not consistent between versions, and building the archive takes a lot of time and space. What other options might suit this purpose better?


  • BASH: How to count all the human readable files?

    - by user1687406
    I'm taking an intro course to UNIX and have a homework question that follows: "How many files in the previous question are text files? A text file is any file containing human-readable content. (TRICK QUESTION. Run the file command on a file to see whether the file is a text file or a binary data file! If you simply count the number of files with the .txt extension you will get no points for this question.)" The previous question simply asked how many regular files there were, which was easy to figure out by doing:

        find . -type f | wc -l

    I'm just having trouble determining what "human-readable content" is, since I'm assuming it means anything besides binary/assembly, but I thought that's what -type f displays. Maybe that's what the professor meant by saying "trick question"? This question has a follow-up later that also asks, "What text files contain the string 'csc' in any mix of upper and lower case?" Obviously "text" refers to more than just .txt files, but I need to figure out the first question to determine this!
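
    A minimal sketch along the lines the hint suggests (hedged: file's descriptions vary between systems, so matching on "text" is a heuristic, and filenames containing colons would confuse the cut):

        # Count files whose `file` description mentions "text"
        find . -type f -exec file {} + | grep -c 'text'

        # Follow-up: list the text files containing "csc" in any case
        find . -type f -exec file {} + | grep 'text' | cut -d: -f1 | xargs grep -li 'csc'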


  • Writing a plist

    - by iOS-Newbie
    I am trying to test out writing a dictionary to a plist. The following code does not report any errors, but I cannot find any trace of the file that I supposedly wrote. Here is the code snippet:

        NSDictionary *myDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
            @"First letter of the alphabet", @"A",
            @"Second letter of the alphabet", @"B",
            @"Third letter of the alphabet", @"C",
            nil];

    I can see the dictionary contents displayed properly with either of these calls:

        NSLog(@"Here is my partial dictionary %@", myDictionary);

        for (NSString *key in myDictionary)
            NSLog(@"here it is again %@ %@", key, [myDictionary objectForKey:key]);

    The following code displays the "succeeded" message when the program is run repeatedly,

        if ([myDictionary writeToFile:@"myDictionary" atomically:YES] == NO)
            NSLog(@"write to file failed");
        else
            NSLog(@"write to file succeeded");

    even when changing the atomically: argument to NO so as not to write a temporary file. However, when I search my current directory, or even my entire Mac, I cannot find any file called "myDictionary.plist", or any file with the string "myDictionary". Isn't the path @"myDictionary" supposed to refer to a file in the current directory, i.e. where the class executable resides?
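
    A minimal sketch of writing to an explicit, known location instead. A relative path like @"myDictionary" resolves against the process's current working directory, which for a GUI or simulator process is rarely anywhere you would think to look; the Documents folder is the conventional target, and the .plist extension is added explicitly here (writeToFile: does not add it for you):

        NSString *docsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                                 NSUserDomainMask, YES) objectAtIndex:0];
        NSString *plistPath = [docsDir stringByAppendingPathComponent:@"myDictionary.plist"];
        if ([myDictionary writeToFile:plistPath atomically:YES])
            NSLog(@"wrote %@", plistPath);  // log the absolute path so the file is easy to find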


  • MySQL Binary Storage using BLOB VS OS File System: large files, large quantities, large problems.

    - by Quantico773
    Hi guys,

    Versions I am running (basically the latest of everything): PHP 5.3.1, MySQL 5.1.41, Apache 2.2.14, OS: CentOS (latest).

    Here is the situation. I have thousands of very important documents, ranging from customer contracts to voice signatures (recordings of customer authorisation for contracts), with file types including, but not limited to, jpg, gif, png, tiff, doc, docx, xls, wav, mp3, pdf, etc. All of these documents are currently stored on several servers, including Windows 32-bit, CentOS and Mac, among others. Some files are also stored on employees' desktop computers and laptops, and some are still hard copies stored in hundreds of boxes and filing cabinets.

    Now, because customers or lawyers could demand evidence of contracts at any time, my company has to be able to search and locate the correct document(s) effectively. For this reason ALL of these files have to be digitised (if not already) and correlated into some sort of order for searching and accessing.

    As the programmer, I have created a full Customer Relations Management tool that the whole company uses. This includes customer profile management, order and job tracking tools, job/sale creation and management modules, etc. At the moment, any file that is needed at a customer profile level (driver's licence, credit authority, etc.) or at a job/sale level (contracts, voice signatures, etc.) can be uploaded to the server, where it sits in a parent/child hierarchy structure, just like Windows Explorer or any other typical file management model. The structure appears as such:

        drivers_license
        |- DL_123.jpg
        voice_signatures
        |- VS_123.wav
        |- VS_4567.wav
        contracts

    The files are uploaded using PHP and Apache, and are stored in the file system of the OS. At upload time, certain information about the file(s) is stored in a MySQL database. The FileUploads table stores:

        FileID
        CustomerID (the customer the file belongs to; they all have this)
        JobID/SaleID (the id of the associated job/sale, if any)
        FileSize
        FileType
        UploadedDateTime
        UploadedBy
        FilePath (the directory path the file is stored in)
        FileName (current name of the uploaded file; a combination of CustomerID and JobID/SaleID if applicable)
        FileDescription
        OriginalFileName (original name of the source file when uploaded, including extension)

    So as you can see, the file is linked to the database by the file name. When I want to provide a customer's files for download to a user, all I have to do is:

        SELECT * FROM FileUploads WHERE CustomerID = 123 OR JobID = 2345;

    This outputs all the file details I require, and with the FilePath and FileName I can provide the link for download: http://server/FilePath/FileName.

    There are a number of problems with this method: storing files in this "database unconscious" environment means data integrity is not kept (if a record is deleted, the file may not be deleted also, or vice versa); files are strewn all over the place on different servers, computers, etc.; and the file name is the ONLY thing matching the binary to the database, the customer profile and the customer records. There are many more reasons, some of which are described here: http://www.dreamwerx.net/site/article01 . There is also an interesting article here: sietch.net/ViewNewsItem.aspx?NewsItemID=124 .

    SO, after much research I have pretty much decided I am going to store ALL of these files in the database, as a BLOB or LONGBLOB, but there are still many considerations before I do this.

    I know that storing them in the database is a viable option; however, there are a number of methods of storing them. I also know storing them is one thing; correlating and accessing them in a manageable way is another thing entirely. The article at dreamwerx.net/site/article01 describes a way of splitting the uploaded binary files into 64 KB chunks, storing each chunk with the FileID, and then streaming the actual binary file to the client using headers. This is a really cool idea, since it relieves pressure on the server's memory: instead of loading an entire 100 MB file into RAM and then sending it to the client, it is done 64 KB at a time. I have tried this (and updated his scripts), and it was totally successful, in a very small frame of testing.

    So if you agree that this method is a viable, stable and robust long-term option to store moderately large files (1 KB to a couple of hundred megs), and large quantities of these files, let me know what other considerations or ideas you have. Also, I am considering getting a current "file management" PHP script that gives an interface for managing files stored in the file system, and converting it to manage files stored in the database. If there is already software out there that does this, please let me know. I guess there are many questions I could ask, and all the information is up there ^^ so please, discuss all aspects of this and we can pass ideas back and forth and teach each other.

    Cheers, Quantico773
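
    A minimal sketch of the 64 KB chunk-streaming idea described above (hedged: the file_chunks table and column names are assumptions, not the dreamwerx schema; $dbLink is a connected mysqli handle):

        <?php
        // Stream a file stored as 64 KB BLOB chunks, one chunk in memory at a time.
        $fileId = intval($_GET['id']);
        $stmt = $dbLink->prepare('SELECT data FROM file_chunks WHERE file_id = ? ORDER BY chunk_no');
        $stmt->bind_param('i', $fileId);
        $stmt->execute();
        $stmt->bind_result($chunk);

        header('Content-Type: application/octet-stream');
        while ($stmt->fetch()) {
            echo $chunk;  // at most one 64 KB chunk resident per iteration
            flush();
        }
        $stmt->close();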


  • Generic class for performing mass-parallel queries. Feedback?

    - by Aaron
    I don't understand why, but there appears to be no mechanism in the client library for performing many queries in parallel against Windows Azure Table Storage. I've created a template class that can be used to save considerable time, and you're welcome to use it however you wish. I would appreciate it, however, if you could pick it apart and provide feedback on how to improve this class.

        public class AsyncDataQuery<T> where T : new()
        {
            public AsyncDataQuery(bool preserve_order)
            {
                m_preserve_order = preserve_order;
                this.Queries = new List<CloudTableQuery<T>>(1000);
            }

            public void AddQuery(IQueryable<T> query)
            {
                var data_query = (DataServiceQuery<T>)query;
                var uri = data_query.RequestUri; // required
                this.Queries.Add(new CloudTableQuery<T>(data_query));
            }

            /// <summary>
            /// Blocking but still optimized.
            /// </summary>
            public List<T> Execute()
            {
                this.BeginAsync();
                return this.EndAsync();
            }

            public void BeginAsync()
            {
                if (m_preserve_order == true)
                {
                    this.Items = new List<T>(Queries.Count);
                    for (var i = 0; i < Queries.Count; i++)
                    {
                        this.Items.Add(new T());
                    }
                }
                else
                {
                    this.Items = new List<T>(Queries.Count * 2);
                }
                m_wait = new ManualResetEvent(false);
                for (var i = 0; i < Queries.Count; i++)
                {
                    var query = Queries[i];
                    query.BeginExecuteSegmented(callback, i);
                }
            }

            public List<T> EndAsync()
            {
                m_wait.WaitOne();
                return this.Items;
            }

            private List<T> Items { get; set; }
            private List<CloudTableQuery<T>> Queries { get; set; }

            private bool m_preserve_order;
            private ManualResetEvent m_wait;
            private int m_completed = 0;

            private void callback(IAsyncResult ar)
            {
                int i = (int)ar.AsyncState;
                CloudTableQuery<T> query = Queries[i];
                var response = query.EndExecuteSegmented(ar);
                if (m_preserve_order == true)
                {
                    // preserve ordering only supports one result per query
                    this.Items[i] = response.Results.First();
                }
                else
                {
                    // add any number of items
                    this.Items.AddRange(response.Results);
                }
                if (response.HasMoreResults == true)
                {
                    // more data to pull
                    query.BeginExecuteSegmented(response.ContinuationToken, callback, i);
                    return;
                }
                // use Interlocked's return value directly; assigning it back to
                // m_completed afterwards would defeat the atomicity
                if (Interlocked.Increment(ref m_completed) == Queries.Count)
                {
                    m_wait.Set();
                }
            }
        }


  • Objective-C memory management issue

    - by Toby Wilson
    I've created a graphing application that calls a web service. The user can zoom & move around the graph, and the program occasionally decides to call the web service for more data accordingly. This is achieved by the following process:

    - The graph has a render loop which constantly renders the graph, and some decision logic which adds web service call information to a stack.
    - A separate thread takes the most recent web service call information from the stack and uses it to make the web service call. The other objects on the stack get binned.

    The idea of this is to reduce the number of web service calls to only those appropriate, and only one at a time. Right, with the long story out of the way (for which I apologise), here is my memory management problem:

    The graph has persistent (and suitably locked) NSDate* objects for the currently displayed start & end times of the graph. These are passed into the initialisers for my web service request objects. The web service call objects then retain the dates. After the web service calls have been made (or binned if they were out of date), they release the NSDate*. The graph itself releases and reallocates new NSDates* on the 'touches ended' event.

    If there is only one web service call object on the stack when removeAllObjects is called, EXC_BAD_ACCESS occurs in the web service call object's deallocation method when it attempts to release the date objects (even though they appear to exist and are in scope in the debugger). If, however, I comment out the release messages from dealloc, no memory leak occurs when one object is on the stack, but memory leaks occur if there is more than one object on the stack.

    I have absolutely no idea what is going wrong. It doesn't make a difference what storage semantics I use for the web service call objects' dates, as they are assigned in the initialiser and then only read (so for correctness' sake they are set to readonly). It also doesn't seem to make a difference whether I retain or copy the dates in the initialiser (though anything else obviously falls out of scope or is unwantedly released elsewhere and causes a crash). I'm sorry this explanation is long-winded; I hope it's sufficiently clear, but I'm not gambling on that either, I'm afraid. Major thanks to anyone who can help, or can even suggest anything I may have missed.
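
    For reference, a minimal sketch of the ownership pattern the question describes (names are invented; manual reference counting, pre-ARC). If each call object owns its dates this way, an EXC_BAD_ACCESS inside dealloc usually points at an extra release somewhere else, e.g. the graph releasing a date it had already handed off without the call object's retain having happened first:

        - (id)initWithStart:(NSDate *)start end:(NSDate *)end {
            if ((self = [super init])) {
                m_start = [start retain];  // take ownership; the caller keeps its own reference
                m_end = [end retain];
            }
            return self;
        }

        - (void)dealloc {
            [m_start release];  // balanced exactly once per init
            [m_end release];
            [super dealloc];
        }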


  • ASP.NET performance tip: combine multiple script files into one request with ScriptManager

    - by Jalpesh P. Vadgama
    We all need JavaScript for our web applications, and we store our JavaScript code in .js files. Now, if we have more than one .js file, the browser will create a new request for each .js file, which is a little overhead in terms of performance. If you have a very big enterprise application, you will have a lot of overhead for this. The ASP.NET ScriptManager provides a feature to combine multiple JavaScript files into one request, but you must remember that this feature is only available with .NET Framework 3.5 SP1 or higher versions. Let's take a simple example. I have two JavaScript files, JScript1.js and JScript2.js, each with its own function:

        // JScript1.js
        function Task1() {
            alert('task1');
        }

        // JScript2.js
        function Task2() {
            alert('task2');
        }

    Now I add script references with the ScriptManager and use these functions in my code like this:

        <form id="form1" runat="server">
            <asp:ScriptManager ID="myScriptManager" runat="server">
                <Scripts>
                    <asp:ScriptReference Path="~/JScript1.js" />
                    <asp:ScriptReference Path="~/JScript2.js" />
                </Scripts>
            </asp:ScriptManager>
            <script language="javascript" type="text/javascript">
                Task1();
                Task2();
            </script>
        </form>

    Now let's test it in Firefox with the Lori plug-in, which shows how many requests are made. In the output you can see 5 requests. Now let's do the same thing with the ASP.NET ScriptManager combined-script feature, like the following:

        <form id="form1" runat="server">
            <asp:ScriptManager ID="myScriptManager" runat="server">
                <CompositeScript>
                    <Scripts>
                        <asp:ScriptReference Path="~/JScript1.js" />
                        <asp:ScriptReference Path="~/JScript2.js" />
                    </Scripts>
                </CompositeScript>
            </asp:ScriptManager>
            <script language="javascript" type="text/javascript">
                Task1();
                Task2();
            </script>
        </form>

    Now let's run it and see how many requests there are. As you can see, we now have only 4 requests compared to 5 earlier: the ScriptManager combined both scripts into one request. So if you have lots of JavaScript files, you can reduce your loading time by combining multiple script files into one request. Hope you liked it. Stay tuned for more!!! Happy programming.

    Technorati Tags: ASP.NET, ScriptManager, Microsoft Ajax


  • Creating a Synchronous BPEL composite using File Adapter

    - by [email protected]
    By default, the JDeveloper wizard generates asynchronous WSDLs when you use technology adapters. Typically, a user follows these steps when creating an adapter scenario in 11g:

    1) Create a SOA Application with either "Composite with BPEL" or an "Empty Composite". If the user chooses "Empty Composite", he or she is required to drop the "BPEL Process" from the "Service Components" pane onto the SOA Composite Editor. Either way, the user comes to the screen where the process details are filled in. Please note that the user is required to choose "Define Service Later" as the template.

    2) Create the inbound service and outbound references, and wire them with the BPEL component.

    3) Finally, create the BPEL process with the initiating <receive> activity to retrieve the payload and an <invoke> activity to write the payload.

    This is how most BPEL processes that use adapters are modeled. And if we scrutinize the generated WSDL, we can clearly see that it is one-way, which makes the BPEL process asynchronous. In other words, the inbound FileAdapter polls for files in the directory, and for every file that it finds, it translates the content into XML and publishes it to BPEL. But since the BPEL process is asynchronous, the adapter returns immediately after the publish and performs the required post-processing, e.g. deletion/archival and so on.

    The disadvantage of such asynchronous BPEL processes is that it becomes difficult to throttle the inbound adapter. In other words, the inbound adapter keeps sending messages to BPEL without waiting for the downstream business processes to complete. This might lead to several issues, including higher memory usage, CPU usage and so on. In order to alleviate these problems, we will manually tweak the WSDL and BPEL artifacts into synchronous processes. Once we have synchronous BPEL processes, the inbound adapter automatically throttles itself, since it is forced to wait for the downstream process to complete with a <reply> before processing the next file or message.

    The tweak is to convert the one-way WSDL into a two-way WSDL, thereby making it synchronous, and then add a <reply> activity to the inbound adapter partnerlink at the end of your BPEL process. You are done.

    Please remember that such an exercise is NOT required for Mediator, since the Mediator routing rules are sequential by default. In other words, the Mediator uses the caller thread (the inbound file adapter thread) for processing the routing rules. This is the case even if the WSDL for the Mediator is one-way.
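
    A minimal sketch of the one-way-to-two-way WSDL change (element and message names are invented for illustration): adding an <output> to the operation is what turns the contract synchronous, so the adapter blocks until the BPEL <reply> arrives:

        <!-- asynchronous (one-way): request only -->
        <portType name="Read_ptt">
          <operation name="Read">
            <input message="tns:Read_msg"/>
          </operation>
        </portType>

        <!-- synchronous (two-way): the added output forces the caller to wait -->
        <portType name="Read_ptt">
          <operation name="Read">
            <input message="tns:Read_msg"/>
            <output message="tns:ReadResponse_msg"/>
          </operation>
        </portType>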


  • Deploy Oracle Management Agent using RPM File

    - by cristiano.toni
    1) Create an rpm package on Enterprise Manager 12c.

    a) As root:

        # yum install rpmbuild
        # mkdir /usr/lib/oracle

    b) As the oracle user:

        # cd $<OMS_HOME>/bin/
        # emcli get_supported_platforms
        -----------------------------------------------
        Version = 12.1.0.3.0
        Platform = Linux x86-64
        -----------------------------------------------
        Platforms list displayed successfully.
        # emcli get_agentimage_rpm -destination=/tmp/agentRPM -platform="Linux x86-64" \
          -version=12.1.0.3.0
        Platform:Linux x86-64
        Destination:/tmp/agentRPM
        Exalogic:false
        Checking for disk space requirements...
        === Partition Detail ===
        Space free : 6 GB
        Space required : 1 GB
        RPM creation in progress ...
        Check the logs at /Oracle/gc_inst/em/EMGC_OMS1/sysman/emcli/setup/.emcli/get_agentimage_rpm_date-PM.log
        Copying agent image from software library to /tmp/agentRPM
        Setting property ORACLE_HOME to:/Oracle/middleware/oms
        calling pulloneoffs with arguments:/Oracle/middleware/oms/Oracle/middleware/oms/sysman/agent/ \
        12.1.0.3.0_AgentCore_226.zip12.1.0.3.0Linux x86-64/tmp/agentRPMtrue
        Agent Image copied successfully...
        Creation of RPM started...
        RPM creation successful.
        Agent image to rpm conversion completed successfully

    2) Copy it to all new hosts and install it, as root.

    c) Check and install the rpm file:

        # rpm -ivh --test oracle-agt-12.1.0.3.0-1.0.x86_64.rpm
        Preparing...                ########################################### [100%]
        # rpm -ivh oracle-agt-12.1.0.3.0-1.0.x86_64.rpm
        Preparing...                ########################################### [100%]
        Running the prereq
           1:oracle-agt             ########################################### [100%]
        Agent RPM installation is completed successfully.
        Now to configure the agent follow the below steps:
        1. Edit the properties file: /usr/lib/oracle/agent/agent.properties with the correct values
        2. Execute the script /etc/init.d/oracle-agt RESPONSE_FILE=/usr/lib/oracle/agent/agent.properties

    d) Create a user for the agent:

        # useradd -m -d /home/em12adm -s /bin/bash -g dba -G oinstall em12adm
        # passwd em12adm

    e) Edit the file /usr/lib/oracle/agent/agent.properties:

        # vi /usr/lib/oracle/agent/agent.properties
        OMS_HOST=<host_Enterprise_Manager>
        OMS_PORT=<HTTPS Upload Port>
        AGENT_REGISTRATION_PASSWORD=oracle
        AGENT_USERNAME=em12adm
        AGENT_GROUP=dba
        ORACLE_HOSTNAME=oraclevm-mgmt
        # chown -R em12adm:dba /usr/lib/oracle/agent/

    Start the agent and register the new host server in EM12c:

        # /etc/init.d/oracle-agt RESPONSE_FILE=/usr/lib/oracle/agent/agent.properties

    Now you have registered your new target host in EM12c.


  • Accessing a blob without using a web role?

    - by Egon
    I wanted to know if there is a way we can upload/download a blob, and add, remove or view its metadata, without using a web role. If my application has a lot of GUI, should there be multiple web roles? Everywhere I look, the web role's default.aspx.cs file has everything to do with the blob based on an event, which is perfectly fine, but what if my GUI is more complicated?
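
    A minimal sketch of blob access from any .NET process, e.g. a worker role or console app; no web role is required (account credentials, container and file names are placeholders; this uses the StorageClient-era API that was current for this question):

        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...");
        var client = account.CreateCloudBlobClient();
        var container = client.GetContainerReference("pictures");
        container.CreateIfNotExist();

        var blob = container.GetBlobReference("photo1.jpg");
        blob.UploadFile(@"C:\photo1.jpg");          // upload
        blob.Metadata["owner"] = "egon";
        blob.SetMetadata();                         // add/update metadata
        blob.FetchAttributes();                     // refresh/view metadata
        blob.DownloadToFile(@"C:\photo1-copy.jpg"); // download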


  • Open file - Security warning

    - by joker
    Does anyone know how to disable the unknown-publisher security warning when running an application in Windows XP Home? It's pretty annoying to have to click Run every time... I have tried: run gpedit.msc, go to Local Computer Policy > User Configuration > Administrative Templates > Windows Components > Attachment Manager, enable "Default risk level for file attachments", then enable "Inclusion list for low risk file types" and add to this list the file extensions that you want to open without triggering the warning. But the file gpedit.msc does not exist on my computer; I checked the system32 folder as well.


  • OperationalError "unable to open database file" processing query results with SQLAlchemy and SQLite3

    - by Peter
    I'm running into this little problem that I hope is just a dumb user error. It looks like some sort of size limit with a query to a SQLite database. I managed to reproduce the issue with an in-memory DB and the simple script shown below. I can make it work by reducing the number of records in the DB, by reducing the size of each record, or by dropping the order_by() call. I am using Python 2.5.5 and SQLAlchemy 0.6.0 in a Cygwin environment. Thanks!

        #!/usr/bin/python
        from sqlalchemy.orm import sessionmaker
        import sqlalchemy
        import sqlalchemy.orm

        class Person(object):
            def __init__(self, name):
                self.name = name

        engine = sqlalchemy.create_engine('sqlite:///:memory:')
        Session = sessionmaker(bind=engine)
        metadata = sqlalchemy.schema.MetaData(bind=engine)
        person_table = sqlalchemy.Table('person', metadata,
            sqlalchemy.Column('id', sqlalchemy.types.Integer, primary_key=True),
            sqlalchemy.Column('name', sqlalchemy.types.String))
        metadata.create_all(engine)
        sqlalchemy.orm.mapper(Person, person_table)

        session = Session()
        session.add_all([Person("012345678901234567890123456789012") for i in range(5000)])
        session.commit()

        persons = session.query(Person).order_by(Person.name).all()
        print "count =", len(persons)
        session.close()

    The all() call on the query result fails with the OperationalError exception:

        Traceback (most recent call last):
          File "./stress.py", line 27, in <module>
            persons = session.query(Person).order_by(Person.name).all()
          File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/query.py", line 1343, in all
            return list(self)
          File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/query.py", line 1451, in __iter__
            return self._execute_and_instances(context)
          File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/query.py", line 1456, in _execute_and_instances
            mapper=self._mapper_zero_or_none())
          File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/session.py", line 737, in execute
            clause, params or {})
          File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1109, in execute
            return Connection.executors[c](self, object, multiparams, params)
          File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1186, in _execute_clauseelement
            return self.__execute_context(context)
          File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1215, in __execute_context
            context.parameters[0], context=context)
          File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1284, in _cursor_execute
            self._handle_dbapi_exception(e, statement, parameters, cursor, context)
          File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1282, in _cursor_execute
            self.dialect.do_execute(cursor, statement, parameters, context=context)
          File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/default.py", line 277, in do_execute
            cursor.execute(statement, parameters)
        sqlalchemy.exc.OperationalError: (OperationalError) unable to open database file
        u'SELECT person.id AS person_id, person.name AS person_name \nFROM person ORDER BY person.name' ()
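
    One hedged angle worth testing (an assumption, not a confirmed diagnosis): sorting an unindexed column can make SQLite spill to a temporary file, and "unable to open database file" is also raised when that temp file cannot be created, which is plausible under Cygwin. Two quick experiments:

        import os
        # 1) Point SQLite's temp files at a directory that is definitely writable
        #    (must be set before the connection is opened; the path is an example).
        os.environ['TMPDIR'] = '/tmp'

        engine = sqlalchemy.create_engine('sqlite:///:memory:')
        # 2) Or keep the sort scratch space in RAM for this connection.
        engine.execute('PRAGMA temp_store = MEMORY')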


  • Redirect batch stderr to file

    - by nivlam
    I have a batch file that executes a Java application. I'm trying to modify it so that whenever an exception occurs, it writes STDERR out to a file. It looks something like this:

        start java something.jar method %1 %2 2>> log.txt

    Is there a way I can write the arguments %1 and %2 to the log.txt file as well? I don't want to write them to the log file every time this batch file gets called, only when an exception occurs. I tried searching for a way to redirect STDERR into a variable, but I couldn't figure it out. Ideally I'd like the log file to look something like:

        Batch file called with parameters:
        - "first arg"
        - "second arg"
        Exception: java.io.exception etc...
        ------------------------------------
        Batch file called with parameters:
        - "first arg"
        - "second arg"
        Exception: java.io.exception etc...
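
    A minimal sketch of one way to get that layout (an assumption that dropping `start` is acceptable, so the script waits for the JVM and owns the redirection; stderr goes to a scratch file and is appended to the log only when it is non-empty):

        @echo off
        java something.jar method %1 %2 2> "%TEMP%\stderr.tmp"
        for %%F in ("%TEMP%\stderr.tmp") do if %%~zF gtr 0 (
            echo Batch file called with parameters: >> log.txt
            echo - "%~1" >> log.txt
            echo - "%~2" >> log.txt
            type "%TEMP%\stderr.tmp" >> log.txt
            echo ------------------------------------ >> log.txt
        )
        del "%TEMP%\stderr.tmp"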


  • .exe File becomes corrupted when downloaded from server

    - by Kerri
    Firstly: I'm a lowly web designer who knows just enough PHP to be dangerous and just enough about server administration to be, well, nothing. I probably won't understand you unless you're very clear!

    The setup: I've set up a website where the client uploads files to a specific directory, and those files are made available, through PHP, for download by users. The files are generally executable files over 50 MB. The client does not want them zipped, as they feel their users aren't savvy enough to unzip them. I'm using the PHP below to force a download dialogue box and hide the directory where the files are located. It's a Linux server, if that makes a difference.

    The problem: There is a certain file that becomes corrupt after the user tries to download it. It is an executable file, but when it's clicked on, a blank DOS window opens up. The original file, prior to download, opens perfectly. There are several other similar files that go through the same exact download procedure, and all of those work just fine.

    Things I've tried: I've tried uploading the file zipped, then unzipping it on the server, to make sure it wasn't becoming corrupt during upload; no luck. I've also compared the binary code of the original file to the downloaded file that doesn't work, and they're exactly the same (so the PHP isn't accidentally inserting anything extra into the file). Could it be an issue with the headers in my downloadFile function? I really am not sure how to troubleshoot this one…

    This is the download PHP, if it's relevant ($filenamereplace is defined elsewhere):

        downloadFile("../DOWNLOADS/files/$filenamereplace", "$filenamereplace");

        function downloadFile($file, $filename) {
            if (file_exists($file)) {
                header('Content-Description: File Transfer');
                header('Content-Type: application/octet-stream');
                header('Content-Disposition: attachment; filename="' . $filename . '"');
                header('Content-Transfer-Encoding: binary');
                header('Expires: 0');
                header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
                header('Pragma: public');
                header('Content-Length: ' . filesize($file));
                @flush();
                readfile($file);
                exit;
            }
        }
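
    A hedged guess at one common failure mode (an assumption, not a diagnosis of this exact file, and the byte-compare above argues against it): any stray bytes the script emits besides the file itself, such as a UTF-8 BOM or whitespace outside the PHP tags of an included file, get prepended to the binary. Clearing the output buffers before streaming rules that out:

        function downloadFile($file, $filename) {
            if (!file_exists($file)) { return; }
            // Discard anything already buffered (BOM, stray whitespace from includes)
            // so the response body is exactly the file's bytes.
            while (ob_get_level() > 0) { ob_end_clean(); }
            header('Content-Type: application/octet-stream');
            header('Content-Disposition: attachment; filename="' . $filename . '"');
            header('Content-Length: ' . filesize($file));
            readfile($file);
            exit;
        }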


  • Deleting a single file from the trash in Mac OS X Snow Leopard

    - by SteveTheOcean
    In earlier versions of Mac OS X, one could delete a file from the trash by opening a terminal window and typing rm ~/.Trash/file_i_want_to_delete (see this previous post). Unlike earlier versions, Mac OS X Snow Leopard can "put back" a file from the trash into its original directory. Will the rm trick still work? Testing shows it does delete the file, but what happens to the "put back" information that specifies the directory from which the file was deleted?


  • Where can I find a full distribution of FAR Manager (plugins bundle)?

    - by Sorin Sbarnea
    FAR Manager is a very powerful open-source file manager for Windows. Most of its qualities come from its plugins, but the official distribution is missing them. I'm looking for an updated bundle that contains the most important FAR Manager plugins:

    - WinSCP - SCP/FTP file transfer
    - Colorer - syntax highlighting
    - 7-Zip - support for archiving to Zip and 7-Zip, and unarchiving almost any format
    - Picture View 2 - view pictures using F3, no external viewer needed
    - ConEmu - additional console support, including resize support


  • Fedora Tomcat log file path

    - by Kamil
    My log file location is configured here:

        kamil@localhost tomcat$ grep "logs/" ./*
        ./log4j.properties:log4j.appender.R.File=${catalina.home}/logs/tomcat.log

    My CATALINA_HOME is:

        kamil@localhost tomcat$ sudo grep "CATALINA" ./*
        ...
        ./tomcat.conf:CATALINA_HOME="/usr/share/tomcat"

    That suggests my log file should be here, and there it is:

        kamil@localhost tomcat$ sudo ls /usr/share/tomcat/logs/ | grep .out
        catalina.out

    So why can't I start the server?

        kamil@localhost tomcat$ sudo tomcat start
        /usr/sbin/tomcat: line 30: /logs/catalina.out: No such file or directory
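
    A hedged reading of that error (inferred from the path in the message, not from the script itself): line 30 of /usr/sbin/tomcat apparently builds the path as "$CATALINA_HOME/logs/catalina.out", and sudo resets the environment, so CATALINA_HOME expands to nothing and the path degrades to /logs/catalina.out. Two things worth trying:

        # Pass the variable through sudo's environment reset (path taken from tomcat.conf above)
        sudo CATALINA_HOME=/usr/share/tomcat tomcat start

        # Or start it through the init/service wrapper, which sources tomcat.conf itself
        sudo service tomcat start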

