Search Results

Search found 4442 results on 178 pages for 'html5 filesystem'.

  • FileInputStream for a generic file system

    - by Akhil
    I have a file that contains Java serialized objects like Vector. I have stored this file on the Hadoop Distributed File System (HDFS). Now I intend to read this file (using the readObject method) in one of the map tasks. I suppose FileInputStream in = new FileInputStream("hdfs/path/to/file"); won't work, as the file is stored on HDFS. So I thought of using the org.apache.hadoop.fs.FileSystem class. But unfortunately it does not have any method that returns a FileInputStream. All it has is a method that returns an FSDataInputStream, but I want an input stream that can read serialized Java objects like Vector from a file, rather than just the primitive data types that FSDataInputStream would handle. Please help!
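
    FSDataInputStream extends java.io.InputStream, so the usual fix is simply to wrap it in an ObjectInputStream; readObject() then works over HDFS exactly as it does over a local file. A minimal sketch (the HDFS path is hypothetical):

        import java.io.ObjectInputStream;
        import java.util.Vector;

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FSDataInputStream;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class HdfsObjectReader {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                FileSystem fs = FileSystem.get(conf);   // picks up the configured HDFS

                // FSDataInputStream extends InputStream, so it can be wrapped directly.
                FSDataInputStream in = fs.open(new Path("/path/to/file"));  // hypothetical path
                ObjectInputStream ois = new ObjectInputStream(in);
                Vector<?> v = (Vector<?>) ois.readObject();
                ois.close();
                System.out.println("Read a Vector with " + v.size() + " elements");
            }
        }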

  • Web Application NAT Traversal

    - by pbreault
    We are deploying web applications in Java using Tomcat on client machines across the country. Once they are installed, we want to allow remote access to these web applications through a central server, but we do not want our clients to have to open ports on their routers. Is there a way to tunnel the HTTP traffic so that the central server can access the web application behind the firewall? The central server has a static IP address and we have full control over it. We don't need to access the filesystem; we only want to access the web application itself through a browser.
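
    One approach that fits these constraints is a reverse SSH tunnel: each client machine opens an outbound connection to the central server, so no inbound ports need to be opened on the client routers. A sketch (host name, user, and port numbers are hypothetical):

        # Run on each client machine: forwards port 9001 on the central server
        # back to the local Tomcat on port 8080, over an outbound connection.
        ssh -N -R 9001:localhost:8080 tunneluser@central.example.com

        # On the central server, this client's web application is then reachable
        # from a browser at http://localhost:9001/ (or proxied out by the server).

    A helper such as autossh can re-establish the tunnel automatically if the connection drops.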

  • What to look for in a switch for LAN/WAN versus an iSCSI SAN?

    - by Luke
    I'm setting up a VMware ESXi 5 environment with 3 server nodes. Dell recommended 2x Force10 S60 switches shared between the iSCSI SAN and the LAN/WAN. The S60 switches are extremely powerful: they have 1.25 GB of buffer cache and < 9us latency. But they are very expensive (online price ~$15k per switch; the actual quote was a little less). I've been told that "by the book" you should have at least 2 internal switches for the SAN, and 2 switches for the LAN/WAN (each with a redundant partner). I know some of the pros and cons of each approach. What I'm wondering is: would it be more cost-effective to separate the SAN from the LAN with less expensive switches? The answer to this question highlights what I should be looking for in a SAN switch. What should I be looking for in a LAN/WAN switch, in comparison to the SAN? Following on from the above-linked question about the SAN: How is buffer latency measured? When you see 36 MB of buffer cache, is that shared or per port - i.e., on a 48-port switch, would 36 MB be 768 KB per port, or 36 MB per port? With 3 to 6 servers, how much buffer cache do you really need? What else should I be looking at? Our application will be making heavy use of HTML5 WebSockets (a high number of persistent connections). The amount of data being sent is small, and data sent between client and server isn't broadcast (this is not a chat/IM service). We will also be doing some database reporting (CSV export, sums, some joins). We are a small business on a budget; we'd be able to spend no more than about $20k on switches in total (2 or 4).

  • Is there a tool for detecting Visual Studio projects with duplicate GUIDs?

    - by sharptooth
    When creating new Visual Studio C++ projects, there are two ways: either run the wizard and then painfully change all the necessary settings in the project, or just copy an existing project, rename everything in it, and add files to it. The second variant is great, except that the .vcproj file stores a project GUID. This GUID is used to track project dependencies and the startup project when two or more projects are in one solution. If any two projects in one solution have identical GUIDs, problems can arise - dependencies are lost and the startup project is reset on the next solution reload. Clearly there's a need for a tool that would scan a filesystem subtree and detect projects with identical GUIDs. Before I start writing one... is there a ready-made tool for that?
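
    No ready-made tool comes to mind, but the check is small enough to script. A quick sketch in Python (assuming Visual Studio 2005/2008 .vcproj files, which store the GUID in a ProjectGUID attribute):

        import re
        import sys
        from collections import defaultdict
        from pathlib import Path

        # .vcproj files (VS2005/2008) carry the GUID in a ProjectGUID attribute.
        guid_re = re.compile(r'ProjectGUID="(\{[0-9A-Fa-f\-]+\})"')

        seen = defaultdict(list)
        root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
        for vcproj in root.rglob("*.vcproj"):
            match = guid_re.search(vcproj.read_text(errors="ignore"))
            if match:
                seen[match.group(1).upper()].append(vcproj)

        # Report any GUID that appears in more than one project file.
        for guid, files in sorted(seen.items()):
            if len(files) > 1:
                print(guid)
                for path in files:
                    print("   ", path)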

  • What application domains are CPU bound and will tend to benefit from multi-core technologies?

    - by Glomek
    I hear a lot of people talking about the revolution that is coming in programming due to multi-core processors and parallelism, but I can't shake the feeling that for most of us, CPU cycles aren't the bottleneck. Pretty much all of my programs have been I/O bound in one way or another (database, filesystem, network, user interaction, etc.) for a very long time. Now I can think of a few areas where CPU cycles are a limiting factor, like code breaking, graphics, sound, some forms of simulation (weather, physics, etc.), and some forms of mathematical research, but they all seem like fairly specialized application domains. My general impression is that most programs are still I/O bound and that for most of our industry CPUs have been plenty fast for quite a while now. Am I off my rocker? What other application domains are CPU bound today? Do any of them include a large portion of the programming population? In essence, I'm wondering whether the multi-core CPUs will impact very many of us, and if so, how?

  • Netlink user-space and kernel-space communication

    - by sasayins
    Hi, I am learning programming in embedded systems using Linux as my main platform, and I want to create a Device Event Management Service. This service is a user-space application/daemon that will detect when a connected hardware module triggers an event. But my problem is that I don't know where I should start. I read about the Netlink implementation for userspace-kernelspace communication and it seems like a good idea, but I'm not sure it is the best solution. I read that the udev device manager uses Netlink to wait for "uevent" messages from kernel space, but it is not clear to me how to do that. I also read about polling sysfs, but it seems like a bad idea to poll the filesystem. Which implementation do you think I should use in my service? Should I use Netlink (hard; no clue how to) or just poll sysfs (not sure it works)? Thanks
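
    For the Netlink route, the uevent side is less daunting than it looks: udev itself just opens a NETLINK_KOBJECT_UEVENT socket and reads the kernel's broadcast messages. A minimal sketch in C (error handling trimmed):

        /* Minimal uevent listener - the same mechanism udev uses. */
        #include <stdio.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <linux/netlink.h>
        #include <unistd.h>

        int main(void)
        {
            struct sockaddr_nl addr;
            char buf[4096];
            int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_KOBJECT_UEVENT);

            memset(&addr, 0, sizeof(addr));
            addr.nl_family = AF_NETLINK;
            addr.nl_pid = getpid();
            addr.nl_groups = 1;          /* the kernel broadcasts uevents to group 1 */
            bind(fd, (struct sockaddr *)&addr, sizeof(addr));

            for (;;) {
                ssize_t len = recv(fd, buf, sizeof(buf) - 1, 0);
                if (len <= 0)
                    break;
                buf[len] = '\0';
                printf("uevent: %s\n", buf);  /* e.g. "add@/devices/..." */
            }
            close(fd);
            return 0;
        }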

  • How to easily pass a very long string to a worker process under Windows?

    - by sharptooth
    My native C++ Win32 program spawns a worker process and needs to pass a huge configuration string to it. Currently it just passes the string as a command line to CreateProcess(). The problem is that the string keeps getting longer, and now it doesn't fit into the 32K-character limit imposed by Windows. Of course I could complicate the worker process startup - I use an RPC server in it anyway, and I could introduce an RPC request for passing the configuration string - but this would require a lot of changes and make the solution less reliable. Saving the data into a file for passing is also not very elegant: the file could be left on the filesystem and become garbage. What other simple ways are there to pass long strings to a worker process started by my program on Windows?
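
    One simple alternative with no length limit: hand the string to the worker over an anonymous pipe wired to its stdin, and have the worker read stdin to EOF at startup. A sketch (error handling trimmed; the worker binary name is hypothetical):

        // Sketch: pass the configuration to the child through an inherited
        // anonymous pipe connected to its standard input.
        #include <windows.h>
        #include <string>

        bool SpawnWorker(const std::string& config)
        {
            SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };  // inheritable handles
            HANDLE readEnd, writeEnd;
            if (!CreatePipe(&readEnd, &writeEnd, &sa, 0))
                return false;
            // The child must not inherit our write end, or it will never see EOF.
            SetHandleInformation(writeEnd, HANDLE_FLAG_INHERIT, 0);

            STARTUPINFOA si = { sizeof(si) };
            si.dwFlags = STARTF_USESTDHANDLES;
            si.hStdInput = readEnd;
            si.hStdOutput = GetStdHandle(STD_OUTPUT_HANDLE);
            si.hStdError = GetStdHandle(STD_ERROR_HANDLE);

            PROCESS_INFORMATION pi;
            char cmd[] = "worker.exe";  // hypothetical worker binary
            if (!CreateProcessA(NULL, cmd, NULL, NULL, TRUE /* inherit handles */,
                                0, NULL, NULL, &si, &pi))
                return false;
            CloseHandle(readEnd);   // parent no longer needs the child's end

            DWORD written;
            WriteFile(writeEnd, config.data(), (DWORD)config.size(), &written, NULL);
            CloseHandle(writeEnd);  // closing signals EOF: the string is complete

            CloseHandle(pi.hThread);
            CloseHandle(pi.hProcess);
            return true;
        }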

  • Can FileOutputStream() take a relative path as an argument

    - by Ankur
    I am creating a FileOutputStream object. It takes a File or String as an argument in its constructor. My question is: can I give it a relative path as an argument for the location of a file? It doesn't seem to work, but I am trying to work out whether this is possible at all (if not, I will stop trying). If it is not possible, how can I (from a servlet) get the absolute path (on the filesystem, not the logical URL) to the current location, in such a way that I can pass it to the constructor? Part of my problem is that my dev box is Windows but I will publish this to a Unix box, so the paths cannot be the same, i.e. C:/... on Windows and /usr/... on Unix.
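
    FileOutputStream does accept relative paths, but they resolve against the JVM's current working directory (wherever the servlet container happened to be started from), which is rarely useful. From a servlet, ServletContext.getRealPath() maps a webapp-relative path to an absolute filesystem path on either OS. A sketch (the subdirectory and file name are hypothetical):

        import java.io.File;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class ExportServlet extends HttpServlet {
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                // Maps "/WEB-INF/data" inside the deployed webapp to an absolute
                // path, e.g. C:\tomcat\webapps\app\WEB-INF\data or /usr/.../data.
                String dir = getServletContext().getRealPath("/WEB-INF/data");
                FileOutputStream fos = new FileOutputStream(new File(dir, "output.txt"));
                fos.write("hello".getBytes());
                fos.close();
            }
        }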

  • Using ssh for remote command

    - by user1663479
    I need to use ssh to execute a remote command, such as: ssh -l jsilva xman /vol/2011/linux_x64/exe/mx201111.exe When I execute ssh I receive this error message: /cmg/2011.11/linux_x64/exe/mx201111.exe: error while loading shared libraries: libmkl_intel_lp64.so: cannot open shared object file: No such file or directory The application uses the variable LD_LIBRARY_PATH. I inserted this variable into /etc/profiles on the local host and the remote host. The filesystem /cmg is mounted by autofs on both hosts (local and remote). Does anybody have an idea how to resolve this problem? Thanks!
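
    A likely cause: a non-interactive ssh command does not read /etc/profile (login scripts are only sourced by login shells), so LD_LIBRARY_PATH is never set for the remote process. One workaround is to set it inline in the remote command; a sketch (the MKL library directory is a guess and needs adjusting to the actual install):

        # Set the environment explicitly for this one command instead of relying
        # on login scripts, which non-interactive ssh sessions skip.
        ssh -l jsilva xman 'export LD_LIBRARY_PATH=/opt/intel/mkl/lib/em64t:$LD_LIBRARY_PATH; /vol/2011/linux_x64/exe/mx201111.exe'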

  • Linux - How do I get the block map of a given file and/or the free space map of the partition?

    - by Inso Reiges
    Hello, I am on Linux and need to know one of two things: 1) If I have a regular file on some filesystem on a partition under Linux, is there a way to know the set of physical blocks that this file occupies on the drive, from user space? Or at least the set of the filesystem's clusters? 2) Is there a way to get the same information about the whole free space of a given filesystem? In both cases I understand that if there is any way to extract this info, it will probably be totally unsafe and racy (anything could happen to those blocks between the time I see them and the time I act on them somehow). I also really don't want an implementation that has to know a lot about every filesystem.
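
    For the per-file case, the FIBMAP ioctl asks the filesystem itself to translate a logical block number into a physical one, so the caller needs no per-filesystem knowledge (it generally requires root, and it is exactly as racy as described). A sketch:

        /* usage: ./blockmap <file>   (typically needs root for FIBMAP) */
        #include <fcntl.h>
        #include <linux/fs.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
            int fd = open(argv[1], O_RDONLY);
            int blksz;
            ioctl(fd, FIGETBSZ, &blksz);        /* filesystem block size */

            /* Map the first few logical blocks to physical block numbers. */
            for (int i = 0; i < 4; i++) {
                int block = i;                  /* in: logical, out: physical */
                ioctl(fd, FIBMAP, &block);
                printf("logical %d -> physical %d (block size %d)\n",
                       i, block, blksz);
            }
            close(fd);
            return 0;
        }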

  • Measuring CPU time per-thread on Windows

    - by Eli Courtwright
    I'm developing a long-running multi-threaded Python application for Windows, and I want the process to know the CPU time that each of its threads has taken. I can get the overall times for the entire process with os.times(), but I need to know the per-thread times. I know that there are external tools such as the Sysinternals Process Explorer, but my program itself needs to have this information. If I were on Linux, I'd look in the /proc filesystem, as described here. If I were writing C code, I'd use the GetThreadTimes call, as described here. So how can I accomplish this on Windows using Python?
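
    The GetThreadTimes route is reachable from pure Python via ctypes, with each thread querying its own pseudo-handle. A sketch (assuming standard ctypes on Windows):

        import ctypes
        from ctypes import wintypes

        kernel32 = ctypes.windll.kernel32

        class FILETIME(ctypes.Structure):
            _fields_ = [("dwLowDateTime", wintypes.DWORD),
                        ("dwHighDateTime", wintypes.DWORD)]

        def current_thread_cpu_times():
            """Return (kernel_seconds, user_seconds) for the calling thread."""
            creation, exit_, kernel, user = (FILETIME() for _ in range(4))
            handle = kernel32.GetCurrentThread()  # pseudo-handle, no CloseHandle needed
            kernel32.GetThreadTimes(handle,
                                    ctypes.byref(creation), ctypes.byref(exit_),
                                    ctypes.byref(kernel), ctypes.byref(user))
            def to_seconds(ft):  # FILETIME counts 100-nanosecond units
                return ((ft.dwHighDateTime << 32) | ft.dwLowDateTime) / 1e7
            return to_seconds(kernel), to_seconds(user)

        # Each worker thread calls this for itself and reports the result.
        print(current_thread_cpu_times())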

  • HSQLDB and in-memory files

    - by lewap
    Is it possible to set up HSQLDB in such a way that the files with the db information are written into memory instead of to actual files? I want to use HSQLDB to export some data structures together with Hibernate mappings. It is, however, not possible to write temporary files, so I need to generate the files in memory and return a stream with their contents as a response. Setting HSQLDB to use nio does not seem to be a solution, because there is no way to get hold of those files before they get written onto the filesystem. What I'm thinking of is a protocol handler for HSQLDB, but I haven't found a suitable solution yet. To describe it in other words: a hack solution would be to pass HSQLDB a stream or several streams; it would then, during its operation, write data into those streams, and after all data is written, the user of the db could use those streams to send it back over the network.
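
    A sketch of the in-memory half: HSQLDB's mem: protocol keeps the whole catalog in RAM and never writes files, so the contents then come back out through SQL rather than through the filesystem (hedged: in the HSQLDB versions I've used, the SCRIPT statement with no file name returns the DDL-and-data script as a result set):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class MemDbExport {
            public static void main(String[] args) throws Exception {
                Class.forName("org.hsqldb.jdbcDriver");
                // jdbc:hsqldb:mem:<name> keeps the whole catalog in memory.
                Connection c = DriverManager.getConnection("jdbc:hsqldb:mem:export", "sa", "");
                Statement st = c.createStatement();
                st.execute("CREATE TABLE t (id INT, name VARCHAR(20))");
                st.execute("INSERT INTO t VALUES (1, 'demo')");

                // SCRIPT with no file argument returns the DDL+data as a result set,
                // which can be written to any stream and sent over the network.
                ResultSet rs = st.executeQuery("SCRIPT");
                StringBuilder dump = new StringBuilder();
                while (rs.next()) {
                    dump.append(rs.getString(1)).append('\n');
                }
                System.out.print(dump);
            }
        }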

  • How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment [closed]

    - by lmttag
    We have an ASP.NET Web API server that serves up a SQL Server data-driven website. The API uses JSON to transfer data from SQL Server to the front end. We need to move it to an internal production environment (nothing will be exposed on the public Internet) and we’re having problems - or are just not understanding what needs to be done. There are two domains: the corporate domain, where all users log in normally, and the process domain, which contains the database the Web API needs to access. The IT staff wants to put a DMZ between the two domains to house the IIS app and shield the users on the corporate domain from having direct access into the process domain. The ideal configuration is: corp domain (end users) <–> firewall (open port 80) <–> DMZ (web server running IIS) <–> firewall (open port 80 or 1433????) <–> process domain (IIS for Web API and SQL Server) We don’t really understand how to deploy our browser/Web API application in this scenario. Do we need to break up our application so that all the client code is on the IIS server in the DMZ, while the Web API gets installed on the server in the process domain? Does the entire app (client code and Web API) stay together on the IIS server in the DMZ, which then somehow accesses the SQL Server instance to get data? From the IIS server and app in the DMZ, would you simply access the Web API on the server in the process domain by going to http://server/appname/api/getitems? In the second firewall, between the DMZ and the process domain, would you have to open port 1433, or just port 80 since the Web API is an HTTP endpoint? Or is there some better way of deployment (i.e., how are ASP.NET Web API single-page applications written entirely in HTML5 and JavaScript supposed to be deployed to production environments)? NB: The servers are Win2k8 R2, SQL Server 2k8 R2, and IIS 7.5.

  • Calling an svn update from a PHP script via a browser is not working

    - by hbt
    Hey guys, I have two scripts: one running an update, calling shell_exec('svn update') and shell_exec('svn st'), and one running a mysqldump, calling shell_exec('mysqldump params'). The svn script is not running the update command: the svn st call prints results, but the svn update does nothing. I tried declaring parameters when calling svn update, e.g. 'svn update ' . $dir . ' --username myuser --password mypasswd --non-interactive'; -- still nothing. I played with most of the params. If this is something related to binaries/permissions/groups, I don't see it. The mysqldump command works fine and produces a file, so why isn't svn updating the filesystem? Please do not advise using core SVN classes in PHP; this is not an option, as I don't have complete control over the server and the module is not available. Thanks for your help, -hbt PS: an important thing to mention here: the scripts work when called via the command line. They only fail when called via a web browser.
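
    When a command works from the shell but fails from the web server, the usual first step is to capture stderr (shell_exec() silently drops it) and to give svn a usable HOME, since the web-server user often has none and svn wants to create its ~/.subversion configuration. A sketch of both (paths are hypothetical):

        <?php
        // Give svn a writable config dir; the web-server user often has no $HOME.
        putenv('HOME=/tmp/svn-home');              // hypothetical writable dir
        @mkdir('/tmp/svn-home', 0700);

        // Redirect stderr into the output so the real error becomes visible.
        $out = shell_exec('svn update /path/to/working/copy'
                        . ' --username myuser --password mypasswd'
                        . ' --non-interactive --config-dir /tmp/svn-home 2>&1');
        echo "<pre>$out</pre>";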

  • Configuring Hadoop logging to avoid too many log files

    - by Eric Wendelin
    I'm having a problem with Hadoop producing too many log files in $HADOOP_LOG_DIR/userlogs (the ext3 filesystem allows only 32000 subdirectories), which looks like the same problem as in this question: http://stackoverflow.com/questions/2091287/error-in-hadoop-mapreduce My question is: does anyone know how to configure Hadoop to roll the log dir or otherwise prevent this? I'm trying to avoid just setting the "mapred.userlog.retain.hours" and/or "mapred.userlog.limit.kb" properties, because I want to actually keep the log files. I was also hoping to configure this in log4j.properties, but looking at the Hadoop 0.20.2 source, it writes directly to log files instead of actually using log4j. Perhaps I don't fully understand how it's using log4j. Any suggestions or clarifications would be greatly appreciated.

  • mounting without -o loop

    - by jumpinjoe
    Hi, I have written a dummy (RAM disk) block device driver for the Linux kernel. When the driver is loaded, I can see it as /dev/mybd. I can successfully transfer data onto it using the dd command and compare the copied data successfully. The problem is that when I create an ext2/3 filesystem on it, I have to use the -o loop option with the mount command. Otherwise mount fails with the following result: mount: wrong fs type, bad option, bad superblock on mybd, missing codepage or helper program, or other error What could be the problem? Please help. Thanks.

  • Resize the /var directory in Red Hat Enterprise Linux 4

    - by Sri
    I am running NDB MySQL. The log files fill up the /var directory, so now I can't start the ndbd service. As a temporary fix I deleted the log files and everything worked again, but the log files keep filling up the /var directory. I have plenty of space in another partition, so I would like to move space from that partition over to /var. Here is my output from df -h:

        Filesystem                      Type    Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup00-LogVol00 ext3     54G  2.9G   49G   6% /
        /dev/cciss/c0d0p1               ext3     99M   14M   81M  14% /boot
        none                            tmpfs  1013M     0 1013M   0% /dev/shm
        /dev/cciss/c0d0p2               ext3    9.7G  9.7G     0 100% /var

    There is plenty of space in /dev/mapper/VolGroup00-LogVol00, so I would like to move about 10 GB of space from that volume to /var. Could you please help me solve this problem?
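
    Since / lives on LVM (VolGroup00) while /var is a plain partition, /var cannot be grown in place, but a new logical volume can be carved out of the volume group and swapped in as /var. A sketch, assuming vgdisplay shows free extents in VolGroup00 (if not, LogVol00 would have to be shrunk offline first with resize2fs and lvreduce); the copy should be done with everything that writes to /var stopped, e.g. from single-user mode:

        # Create and format a 10 GB logical volume for /var.
        lvcreate -L 10G -n LogVolVar VolGroup00
        mkfs.ext3 /dev/VolGroup00/LogVolVar

        # Copy the current /var across, preserving ownership and permissions.
        mkdir /mnt/newvar
        mount /dev/VolGroup00/LogVolVar /mnt/newvar
        cp -a /var/. /mnt/newvar/

        # Point /etc/fstab at the new volume for /var, then reboot:
        #   /dev/VolGroup00/LogVolVar  /var  ext3  defaults  1 2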

  • Mac OS X: Passing a link to a file from a user process to a kernel module

    - by Inso Reiges
    Hello, I need to pass a link to a file from a user process to an OS X kernel driver. By link I mean anything that uniquely identifies a file on the local filesystem. I need that link to do I/O on that file in the kernel. The most obvious solution seems to be to pass a file name and use a VFS vnode lookup. However, I noticed that the Apple Disk Images helper process passes a raw data array as the image-path property to its driver when attaching a disk image file: <2f 56 6f 6c 75 6d 65 73 2f 73 74 6f 72 61 67 65 2f 74 65 73 74 32 2e 64 6d 67> What is diskimages-helper passing to the kernel driver? Some serialized type, perhaps? If yes, what type is it and how can I use it?
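
    Decoding the raw bytes above as ASCII gives /Volumes/storage/test2.dmg, so diskimages-helper appears to pass nothing more exotic than the plain path; the file-name-plus-vnode-lookup route is therefore consistent with what it does itself. A minimal sketch of the kernel side (KPI calls from sys/vnode.h; error handling trimmed):

        #include <sys/vnode.h>

        /* Resolve a path received from user space into a vnode for kernel I/O. */
        static int do_io_on_path(const char *path)
        {
            vnode_t vp = NULLVP;
            vfs_context_t ctx = vfs_context_create(NULL);

            int err = vnode_lookup(path, 0 /* flags */, &vp, ctx);
            if (err == 0) {
                /* ... perform I/O against vp here (e.g. VNOP_READ/VNOP_WRITE) ... */
                vnode_put(vp);          /* release the iocount vnode_lookup took */
            }
            vfs_context_rele(ctx);
            return err;
        }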

  • How to sanely read and dump structs to disk when some fields are pointers?

    - by bp
    Hello, I'm writing a FUSE plugin in C. I'm keeping track of data structures in the filesystem through structs like:

        typedef struct {
            block_number_t inode;
            filename_t filename;        // char[SOME_SIZE]
            some_other_field_t other_field;
        } fs_directory_table_item_t;

    Obviously, I have to read (write) these structs from (to) disk at some point. I could treat the struct as a sequence of bytes and do something like this:

        read(disk_fd, &directory_table_item, sizeof(fs_directory_table_item_t));

    ...except that cannot possibly work, as filename is actually a pointer to the char array. I'd really like to avoid having to write code like:

        read(disk_fd, &directory_table_item.inode, sizeof(block_number_t));
        read(disk_fd, directory_table_item.filename, sizeof(filename_t));
        read(disk_fd, &directory_table_item.other_field, sizeof(some_other_field_t));

    ...for each struct in the code, because I'd have to replicate code and changes in no fewer than three different places (definition, reading, writing). Any DRYer but still maintainable ideas?
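
    One classic answer is an X-macro: list the fields exactly once, then expand that list three ways to generate the struct definition, the reader, and the writer. A sketch (the typedefs are stand-ins for the question's real types):

        #include <unistd.h>

        typedef unsigned int block_number_t;      /* stand-ins for the real types */
        typedef char filename_t[256];
        typedef unsigned int some_other_field_t;

        /* The single source of truth: every field is listed exactly once. */
        #define DIRECTORY_ITEM_FIELDS(X)          \
            X(block_number_t,     inode)          \
            X(filename_t,         filename)       \
            X(some_other_field_t, other_field)

        typedef struct {
        #define AS_FIELD(type, name) type name;
            DIRECTORY_ITEM_FIELDS(AS_FIELD)
        #undef AS_FIELD
        } fs_directory_table_item_t;

        static void read_item(int fd, fs_directory_table_item_t *it)
        {
        #define AS_READ(type, name) read(fd, &it->name, sizeof(it->name));
            DIRECTORY_ITEM_FIELDS(AS_READ)
        #undef AS_READ
        }

        static void write_item(int fd, const fs_directory_table_item_t *it)
        {
        #define AS_WRITE(type, name) write(fd, &it->name, sizeof(it->name));
            DIRECTORY_ITEM_FIELDS(AS_WRITE)
        #undef AS_WRITE
        }

    Adding a field is then a one-line change to DIRECTORY_ITEM_FIELDS, and the struct, reader, and writer can never drift out of sync.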

  • How to prevent UI from freezing during lengthy process?

    - by OverTheRainbow
    Hello, I need to write a VB.Net 2008 applet to go through all the fixed drives looking for some files. If I put the code in ButtonClick(), the UI freezes until the code is done:

        Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
            'TODO Find way to avoid freezing UI while scanning fixed drives
            Dim drive As DriveInfo
            Dim filelist As Collections.ObjectModel.ReadOnlyCollection(Of String)
            Dim filepath As String
            For Each drive In DriveInfo.GetDrives()
                If drive.DriveType = DriveType.Fixed Then
                    filelist = My.Computer.FileSystem.GetFiles(drive.ToString, FileIO.SearchOption.SearchAllSubDirectories, "MyFiles.*")
                    For Each filepath In filelist
                        'Do stuff
                    Next filepath
                End If
            Next drive
        End Sub

    Google returned information on a BackgroundWorker control: is this the right way to solve this issue? If not, what solution would you recommend, possibly with a really simple example? FWIW, I read that Application.DoEvents() is a left-over from VB Classic and should be avoided. Thank you.
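
    BackgroundWorker is indeed the usual fix: the scan runs on a worker thread while the UI stays responsive, and completion is reported back on the UI thread. A minimal sketch, assuming a BackgroundWorker1 component has been added to the form:

        ' Start the scan from the button instead of doing the work inline.
        Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
            Button1.Enabled = False
            BackgroundWorker1.RunWorkerAsync()
        End Sub

        Private Sub BackgroundWorker1_DoWork(ByVal sender As Object, ByVal e As System.ComponentModel.DoWorkEventArgs) Handles BackgroundWorker1.DoWork
            ' Runs on a worker thread: do NOT touch UI controls in here.
            For Each drive As DriveInfo In DriveInfo.GetDrives()
                If drive.DriveType = DriveType.Fixed Then
                    For Each filepath As String In My.Computer.FileSystem.GetFiles(drive.ToString, FileIO.SearchOption.SearchAllSubDirectories, "MyFiles.*")
                        'Do stuff
                    Next filepath
                End If
            Next drive
        End Sub

        Private Sub BackgroundWorker1_RunWorkerCompleted(ByVal sender As Object, ByVal e As System.ComponentModel.RunWorkerCompletedEventArgs) Handles BackgroundWorker1.RunWorkerCompleted
            ' Back on the UI thread: safe to update controls again.
            Button1.Enabled = True
        End Sub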

  • How does class inheritance work within namespaces in Ruby on Rails 3?

    - by user502052
    In my RoR3 application I have a namespace called Ns, so that I have this filesystem structure:

        ROOT_RAILS/controllers/
        ROOT_RAILS/controllers/application_controller.rb
        ROOT_RAILS/controllers/ns/
        ROOT_RAILS/controllers/ns/ns_controller.rb
        ROOT_RAILS/controllers/ns/names_controller.rb
        ROOT_RAILS/controllers/ns/surnames_controller.rb

    I would like 'ns_controller.rb' to inherit from the application controller, so in the 'ns_controller.rb' file I have:

        class Ns::NsController < ApplicationController
          ...
        end

    Is this the right approach? Anyway, if I have this situation... in 'application_controller.rb':

        class ApplicationController < ActionController::Base
          @profile = Profile.find(1)
        end

    in 'ns_controller.rb':

        class Ns::NsController < ApplicationController
          @name = @profile.name
          @surname = @profile.surname
        end

    ...the '@name' and '@surname' variables are not set. Why?
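
    The reason the variables are not set: instance variables assigned directly in a class body are evaluated once, at class-definition time, on the class object itself - not on the controller instances that handle requests. The Rails 3 idiom is a before_filter, which runs per request on the instance and is inherited by subclasses. A sketch:

        class ApplicationController < ActionController::Base
          before_filter :load_profile

          private

          # Runs for every request, in this controller and all its subclasses.
          def load_profile
            @profile = Profile.find(1)
          end
        end

        class Ns::NsController < ApplicationController
          # @profile is available inside actions, because the inherited
          # before_filter has already run on this instance.
          def show
            @name    = @profile.name
            @surname = @profile.surname
          end
        end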

  • Why does $.ajax(..) not work for me?

    - by dr jerry
    I'm running jQuery from a file, and I'm trying to load an SVG file from my localhost to populate an SVG canvas. However, that does not work as expected. What I do from the filesystem:

        $.ajax({
            url: url,
            timeout: 1000,
            complete: function(xml) { alert('complete'); },
            success: function(xml, status, xreq) { alert('success'); },
            error: function() { alert('error'); }
        });

    The url reads http://localhost/image.svg. When I load this url directly from the address bar of the browser, the page remains white but the page source displays the source of image.svg. Debugging the $.ajax code above reveals that the success: method is hit, but the xml response remains empty. Any help is greatly appreciated. Regards, jeroen.
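
    A likely culprit: a page opened from the local filesystem has a file:// origin, while the SVG lives at http://localhost, so the browser's same-origin policy hands jQuery an empty response even though the request appears to "succeed". Serving the calling page from the same localhost server usually fixes it; a sketch that also tells jQuery to parse the reply as XML:

        // Serve the page itself from http://localhost/ so the origins match,
        // then request the SVG as an XML document.
        $.ajax({
            url: 'http://localhost/image.svg',
            dataType: 'xml',    // parse the response body as XML
            timeout: 1000,
            success: function(xml) {
                alert('got ' + xml.documentElement.nodeName);  // expect "svg"
            },
            error: function(xhr, status) {
                alert('error: ' + status);
            }
        });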

  • Programmatically find out a file type by looking at its binary content. Possible?

    - by daemonkid
    I have a C# component that will receive a file of one of the following types: .doc, .pdf, .xls, .rtf. These will be sent by the calling Siebel legacy app as a file stream. So... [LegacyApp] {Binary file stream} [Component] The legacy app is a black box that can't be modified to tell the component what file type (doc, pdf, xls) it is sending. The component needs to read this binary stream and create a file on the filesystem with the right extension. Any ideas? Thanks for your time.
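
    All four formats can be told apart by their leading "magic" bytes: PDF files start with %PDF, RTF with {\rtf, and .doc/.xls are both OLE2 compound files starting with D0 CF 11 E0 (so splitting doc from xls would need a deeper look inside the compound file). A sketch:

        using System;
        using System.Linq;

        static class FileTypeSniffer
        {
            // Returns a best-guess extension from the first bytes of the stream.
            public static string GuessExtension(byte[] head)
            {
                byte[] pdf  = { 0x25, 0x50, 0x44, 0x46 };            // "%PDF"
                byte[] rtf  = { 0x7B, 0x5C, 0x72, 0x74, 0x66 };      // "{\rtf"
                byte[] ole2 = { 0xD0, 0xCF, 0x11, 0xE0, 0xA1, 0xB1, 0x1A, 0xE1 };

                if (head.Take(pdf.Length).SequenceEqual(pdf)) return ".pdf";
                if (head.Take(rtf.Length).SequenceEqual(rtf)) return ".rtf";
                if (head.Take(ole2.Length).SequenceEqual(ole2))
                    return ".doc"; // or .xls: both are OLE2; inspect further to split them
                return ".bin";     // unknown
            }
        }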

  • How do I read and traverse inodes

    - by Eric Fossum
    I've opened the super-block and group descriptor in an EXT2 filesystem, but I don't know how to read, for instance, the root directory or the files in it... Here's some of what I've got:

        fd = open("/dev/sdb2", O_RDONLY);
        lseek(fd, SuperSize, SEEK_SET);
        read(fd, &super_block, SuperSize);
        lseek(fd, 4096, SEEK_SET);
        read(fd, &groupDesc, DescriptSize);

    but this next part doesn't seem to work...

        lseek(fd, super_block.s_log_block_size*groupDesc.bg_inode_table, SEEK_SET);
        lseek(fd, InodeSize*(EXT2_ROOT_INO-1), SEEK_CUR);
        read(fd, &root, InodeSize);
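
    A likely culprit in the last part: s_log_block_size is not the block size itself but a log2 shift, so the real block size is 1024 << s_log_block_size, and the inode-table offset has to be computed with that. A sketch with the fix (assuming the e2fsprogs headers and an inode in the first block group):

        #include <fcntl.h>
        #include <unistd.h>
        #include <ext2fs/ext2_fs.h>   /* ext2_super_block, ext2_group_desc, ... */

        int main(void)
        {
            struct ext2_super_block sb;
            struct ext2_group_desc gd;
            struct ext2_inode root;

            int fd = open("/dev/sdb2", O_RDONLY);

            pread(fd, &sb, sizeof(sb), 1024);       /* superblock lives at byte 1024 */
            unsigned block_size = 1024 << sb.s_log_block_size;

            /* Group descriptors start in the block after the superblock. */
            off_t gd_off = (sb.s_log_block_size == 0) ? 2048 : block_size;
            pread(fd, &gd, sizeof(gd), gd_off);

            /* Inode table offset = block number * real block size, then index
             * by inode number. (On rev 0 filesystems use 128 for the inode size
             * instead of sb.s_inode_size.) */
            off_t itab = (off_t)gd.bg_inode_table * block_size;
            pread(fd, &root, sizeof(root),
                  itab + (off_t)(EXT2_ROOT_INO - 1) * sb.s_inode_size);

            /* root.i_block[0] now holds the first data block of "/". */
            close(fd);
            return 0;
        }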
