Search Results

Search found 306 results on 13 pages for 'the daemons advocate'.

Page 2/13

  • What is the secure way to isolate ftp server users on unix?

    - by djs
    I've read documentation for various ftp daemons and various long threads about the security implications of using a chroot environment for an ftp server when giving users write access. The vsftpd documentation, in particular, implies that using chroot_local_user is a security hazard, while not using it is not. There seems to be no coverage of the implications of allowing the user access to the entire filesystem (as permitted by their user and group membership), nor of the confusion this can create. So, I'd like to understand which method is correct to use in practice: should an ftp server with authenticated write-access users provide a non-chroot environment, a chroot environment, or some other option? Given that Windows ftp daemons don't have the option to use chroot, they have to implement isolation by other means. Do any unix ftp daemons do something similar?
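
    For concreteness, a minimal sketch of the two chroot styles the vsftpd documentation contrasts (option names are from the vsftpd.conf man page; this is a sketch, not a hardened configuration):

        # /etc/vsftpd.conf -- sketch only
        local_enable=YES
        write_enable=YES
        # jail every local user into their home directory:
        chroot_local_user=YES
        # ...or instead jail only the users listed in a file:
        # chroot_local_user=NO
        # chroot_list_enable=YES
        # chroot_list_file=/etc/vsftpd.chroot_list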

    Read the article

  • Performance impact of Zones.

    - by nospam(at)example.com (Joerg Moellenkamp)
    I was really astonished when I saw this question, because it was an old acquaintance from years ago that I hadn't heard in a long time. Yet there it was again: "What's the overhead of Zones?". Sun wasn't, and Oracle isn't, saying "zero"; we say "minimal". However, in all the performance analysis gigs I have done on customer systems since the introduction of Zones, I have failed to measure any overhead caused by Zones themselves. What I did see was additional load introduced by processes that wouldn't be there if you used only one zone: additional monitoring daemons, or additional daemons with a controlling or supervising job for the application, which resulted in slightly longer process runtimes because those extra daemons wanted some cycles on the CPU as well. So when someone tells me they measured a slight slowdown, I ask whether he or she really measured the impact of the virtualization layer or of a side effect like those described above. It may seem a little hard to believe that a virtualisation technology has no overhead, but keep in mind that there is no hypervisor, just one kernel running that looks and behaves like many operating system instances to apps and users. While this imposes some limits on the technology (because there is just one kernel running, you can't have zones with different kernel versions, obvious even to the cursory observer), it is key to its light weight and thus to the low overhead. Continue reading "Performance impact of Zones."
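
    For anyone reproducing such measurements, Solaris ships per-zone observability, so global load and zone-induced load can be separated with standard commands; a quick sketch (myzone is a placeholder name):

        zoneadm list -cv    # configured and running zones
        prstat -Z           # CPU/memory usage summarized per zone
        ps -z myzone -ef    # processes belonging to one zone only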

    Read the article

  • WebDav rename fails on an Apache mod_dav install behind NginX

    - by The Daemons Advocate
    I'm trying to solve a problem with renaming files over WebDav. Our stack consists of a single machine serving content through Nginx, Varnish and Apache, and with that stack the rename operation fails. To connect to WebDav, a client program must:

      1. Connect over https://host:443 to NginX
      2. NginX unwraps and forwards the request to a Varnish server on http://localhost:81
      3. Varnish forwards the request to Apache on http://localhost:82, which offers a session via mod_dav

    Here's an example of a failed rename:

        $ cadaver https://webdav.domain/
        Authentication required for Webdav on server `webdav.domain':
        Username: user
        Password:
        dav:/> cd sandbox
        dav:/sandbox/> mkdir test
        Creating `test': succeeded.
        dav:/sandbox/> ls
        Listing collection `/sandbox/': succeeded.
        Coll:   test    0  Mar 12 16:00
        dav:/sandbox/> move test newtest
        Moving `/sandbox/test' to `/sandbox/newtest': redirect to http://webdav.domain/sandbox/test/
        dav:/sandbox/> ls
        Listing collection `/sandbox/': succeeded.
        Coll:   test    0  Mar 12 16:00

    For more feedback, the WebDrive Windows client logged an error 502 (Bad Gateway) and a 303 (?) on the rename operation. The extended logs gave this information:

        Destination URI refers to different scheme or port (https://hostname:443) (want: http://hostname:82).

    Some other restrictions: investigation into NginX's WebDav module shows that it doesn't really fit our needs, and forwarding WebDav traffic directly to Apache isn't an option because we don't want to enable Apache SSL. Are there any ways to trick mod_dav to forward to another host? I'm open to ideas :).
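
    For context, the extended log is the crux: mod_dav rejects a MOVE when the Destination header's scheme or port differs from what it believes it serves. A commonly suggested workaround (a sketch, untested here; assumes Apache >= 2.2.4 with mod_headers loaded, which introduced the "early" keyword) rewrites the header on the :82 vhost before mod_dav sees it:

        # rewrite the client's https Destination to what mod_dav expects
        RequestHeader edit Destination ^https://webdav.domain/ http://webdav.domain:82/ early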

    Read the article

  • Remote Desktop Connection Only Works One Way

    - by advocate
    I can't get my desktop to connect to my laptop through Remote Desktop Connection; unfortunately I can only get my laptop to connect to my desktop (quite useless).

    Desktop: Windows 7 Ultimate 64 Bit SP1. Windows firewall is off for all 3 profiles (domain / private / public). Remote desktop connection is installed and set to allow all connections. Under services:

        Running  Remote Desktop Configuration
        Running  Remote Desktop Services
        Running  Remote Desktop Services UserMode Port Redirector
        Running  Remote Procedure Call (RPC)
        Stopped  Remote Access Auto Connection Manager
        Stopped  Remote Access Connection Manager
        Stopped  Remote Procedure Call (RPC) Locator
        Stopped  Remote Registry
        Stopped  Routing and Remote Access
        Stopped  Windows Remote Management (WS-Management)

    Laptop: Windows 7 Home Premium 64 Bit SP1. Windows firewall is off for all 3 profiles (domain / private / public). Remote desktop connection is installed and set to 'Allow Remote Assistance connections to this computer'. Under services:

        Running  Remote Procedure Call (RPC)
        Stopped  Remote Access Auto Connection Manager
        Stopped  Remote Access Connection Manager
        Stopped  Remote Desktop Configuration
        Stopped  Remote Desktop Services
        Stopped  Remote Procedure Call (RPC) Locator
        Stopped  Remote Registry
        Stopped  Routing and Remote Access
        Stopped  Windows Remote Management (WS-Management)

    It should be noted that the laptop I'm trying to connect to is an Alienware and might be running some wonky Dell settings. Also, the remote desktop settings are slightly different because it's a Home edition of Windows and not Ultimate like my desktop. Finally, both computers are in the same Homegroup, so RDC can be reached in one click through the network section of Windows. They're also in the same workgroup, MSHOME, just to see if that helps.
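
    For reference, those service states can be inspected (and, where the Windows edition permits, flipped) from an elevated command prompt; a quick sketch (TermService is the internal name of the Remote Desktop Services service):

        sc query TermService
        sc config TermService start= auto
        net start TermService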

    Read the article

  • Virtualbox Headless Server on Ubuntu missing VRDP Options

    - by The Daemons Advocate
    I'm running VirtualBox headless server on an Ubuntu 64 bit host, and I want to use it remotely. However, I'm having problems connecting via RDP. The DNS names in my network show the host to be 'server', and the guest to be 'ubuntu-vm'. From the official documentation, I gather that I am to connect to 'server' on the default RDP port in order to see the guest machine. I start the virtual machine like so:

        vboxheadless -startvm My_VM

    Then I connect from my laptop, and I get...

        rdesktop -a 16 server
        ERROR: server: unable to connect

    So next I consult the documentation further, and I find there are RDP flags that can be turned on (but which should be on implicitly for a headless server). So I pull up information using 'vboxmanage showvminfo My_VM', and I find the VRDP property is off:

        VRDP Connection: not active

    To make things even weirder, the RDP flag seems to be missing from vboxmanage. I've installed straight from the Ubuntu repos using the virtualbox-ose package; not sure how that measures up against the official docs. For instance, this command doesn't exist:

        VBoxManage modifyvm My_VM --vrdp on

    In the UI, the VM's display settings have the 'Remote Display' option greyed out. What I'm looking for is advice :). I'm open to suggestions that don't involve starting again with something like VMWare. Thanks in advance!
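
    A quick sanity check is whether the installed build knows about VRDP at all; a sketch (vboxmanage should print a command's full usage text when run without arguments, so an empty grep suggests a build without VRDP support, such as the OSE package):

        vboxmanage --version
        vboxmanage modifyvm 2>&1 | grep -i vrdp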

    Read the article

  • Bridging Wireless and Wired Interfaces in Linux

    - by The Daemons Advocate
    My network setup is something like:

        Wireless Router <---> Netbook <---> Ubuntu Desktop

    ...or, more verbosely (with interfaces):

        Wireless Router <--(wireless)--> (eth2) Ubuntu Netbook
        Ubuntu Netbook (eth0) <---(wired)----> (eth0) Ubuntu Desktop

    In a perfect world I'd have the desktop wired, but weird circumstances combined with my wanting to understand more about networking in Linux make me want to figure out how to bridge these two devices. A bit of googling has given me this example using bridge-utils, and here's how I'm (failing) to set up the bridge (on the netbook):

        sudo -i
        ifconfig eth0 0.0.0.0
        ifconfig eth2 0.0.0.0
        brctl addbr bridget
        brctl addif bridget eth0
        brctl addif bridget eth2
        ifconfig bridget up

    ...then, trying to make sure that the netbook can still get on the internets...

        route add default gateway 192.168.2.1
        dhclient bridget

    What happens after this is that the dhclient command above (netbook) doesn't get served an IP, and if I run dhclient on the desktop, it doesn't get served an IP either. A possibly relevant consideration is that I'm running the Network Manager applet that comes with Ubuntu; while I'm sure I could get a command-line wireless configuration going, it's a bit complex. Can someone give me a shout as to where I'm going wrong? I'd also note a related question titled 'Bridging my laptop's wireless and wired adaptors'; however, the setup there is different to mine.
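
    For comparison, the same bridge expressed persistently in /etc/network/interfaces, which also keeps Network Manager's hands off the member interfaces (a sketch; requires bridge-utils):

        auto bridget
        iface bridget inet dhcp
            bridge_ports eth0 eth2

    Note, though, that many wireless drivers refuse to forward frames for foreign MAC addresses while in station mode, so bridging eth2 may fail no matter how the bridge is assembled; that limitation is worth ruling out first.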

    Read the article

  • Log4Net GetLogger creates rolling files even for unreferenced files

    - by ybastiand
    Hi, I have a C# solution that contains three executables, all sharing the same log4net configuration file. At startup, each executable retrieves a logger (one logger per executable, as per the configuration file linked further below). The problem: when one of the executables performs Log.GetLogger(), it creates all the rolling files instead of only the one rolling file that is referenced (appender-ref) in that executable's logger configuration. For instance, when I start my sending daemon executable, it performs Log.GetLogger("SendingDaemonLogger"), which creates the three files Log/RuleScheduler.txt, Log/NotificationGenerator.txt and Log/NotificationSender.txt instead of only the desired Log/NotificationSender.txt. Then when I start another of the executables, for instance the rule scheduler daemon, this other process cannot write to Log/RuleScheduler.txt because it has been created and locked by the sending daemon process. I am guessing there may be three different solutions to my problem:

      1. GetLogger should only create the rolling file appenders that are referenced in the config.
      2. I could have one config file per executable; that way each config file would list only one rolling file appender, and starting each executable would not create the rolling files of the other daemons. I am reluctant to do this, however, because some of the configuration (SMTP appender, console appender) is shared between the daemons and I don't want duplicate copies to maintain. Unless there is a way for one config file to include another?
      3. Maybe there is a way to configure the rolling file so that concurrent access across processes is allowed? This still wouldn't be perfect, in my opinion, because no daemon should be creating the rolling files of other daemons.

    Thanks in advance for your help! I have difficulty posting the config file properly here (this website interprets it as HTML), so please see my log4net configuration file at the following link: log4Net configuration file
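
    Since the config file didn't survive posting, here is the shape being described, reconstructed as a sketch (names follow the question; note that log4net activates every appender referenced anywhere in the parsed config, not just the appenders of the logger being fetched, which is consistent with the symptom in point 1, and MinimalLock is log4net's documented, slower locking model for sharing a file between processes, relevant to point 3):

        <log4net>
          <appender name="NotificationSenderAppender" type="log4net.Appender.RollingFileAppender">
            <file value="Log/NotificationSender.txt" />
            <appendToFile value="true" />
            <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
            <layout type="log4net.Layout.PatternLayout">
              <conversionPattern value="%date %-5level %message%newline" />
            </layout>
          </appender>
          <!-- one logger per executable, each referencing only its own appender -->
          <logger name="SendingDaemonLogger" additivity="false">
            <level value="INFO" />
            <appender-ref ref="NotificationSenderAppender" />
          </logger>
        </log4net>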

    Read the article

  • Run several daemons using Python

    - by ylc
    I noticed that several daemons invoke Python separately. For example, I have both the wicd and ibus daemons running on my machine. Instead of launching a single instance of Python, the daemons run two Python instances at the same time in htop:

        /usr/bin/python2 -O /usr/share/wicd/daemon/monitor.py
        python2 /usr/share/ibus/ui/gtk/main.py

    Is it a waste to do that? If yes, how can I improve this? If no, why avoid running all daemons on a single Python instance?

    Read the article

  • Killing a named screen session with -X only works after reattaching

    - by oversize
    Hello, I am using Ubuntu 8.04.4 and would like to start daemons like this:

        screen -dmS SESSIONNAME script.sh

    Then I want to kill these screens with -X, like so:

        screen -S SESSIONNAME -X kill

    But this does not work: only if I attach and detach that session does it get killed by the above command. What am I doing wrong? I would prefer not to have to attach/detach the session to kill it, since I want to use fabric scripts that start/stop daemons remotely. Thank you.
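
    One variant worth trying first (a sketch; kill targets only the current window, whereas quit tears down the whole session, attached or not):

        screen -S SESSIONNAME -X quit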

    Read the article

  • Installing Epic (Eclipse Plugin) in Pulse Explorer

    - by The Daemons Advocate
    I'm trying to install EPIC using the Pulse Explorer for Eclipse (as I'm rather fond of sharing profiles :). When I go to install the plugin under my account, I get asked for a login to http://e-p-i-c.sf.net. However, the EPIC team's documentation doesn't mention anything about a login. Here's what I've done:

      1. Gone into Pulse and created a new profile based on Eclipse Classic.
      2. Navigated to Software, added the EPIC software site to the list of public sites, and chosen to install it.
      3. Added the Pulse item to the profile.
      4. Run the installer.

    The error shows up while it's all downloading/installing. Login boxes start to appear for EPIC-related components, and I don't have credentials to put in, so all I can do is hit cancel. If I hit cancel, the process fails at the end with the generic error message: "an unexpected error occurred preparing to install and/or launch the selected profile". The bundles that fail to download are:

        org.epic.debug
        org.epic.doc
        org.epic.lib
        org.epic.perleditor
        org.epic.regxp
        org.epic.source

    The component that's exploding is called:

        org.eclipse.equinox.internal.p2.repository.Credentials$LoginCancelledException

    I've had the same effect on Pulse 0.5.x and 0.6.x. No clue where to go from here. I might contact the EPIC and Pulse teams and ask them, but I thought I'd get a better response from here. I'm somewhat sure I'm doing something wrong.

    Read the article

  • Ubuntu server or Debian server (to run C++ apps developed on Ubuntu)

    - by skyeagle
    I have written a number of C++ server-side daemons for my website, using my Ubuntu 9.10 dev machine. The C++ apps are "GUI-less" daemons (and libraries used by the daemons). I am now about to host my website and need to decide whether to go with a Debian server or an Ubuntu server. In a nutshell, here is the situation:

      • I developed on the Ubuntu desktop because I preferred its friendlier GUI.
      • I would like to deploy on a Debian server because of the (perceived?) robustness of Debian over Ubuntu Server (I may be totally wrong here, and in fact this is really what this question is all about).
      • If Debian is indeed more robust, then I have no choice but to go with it. BUT, will my Ubuntu-built C++ apps run on that server, or do I need to recompile them there? (I'd HATE to have to do this, because I want to keep the server machine clean and light: no GUI, no dev tools, etc.)

    This last question is really about binary compatibility between Ubuntu and Debian. I want the server to be robust, secure and stable, and to simply act as a server (i.e. LAMP and very little else; no GUI, etc.). Given that requirement, and the fact that I need to run my C++ apps (developed on Ubuntu 9.10), I need advice on which OS to choose for the server. Ideally, any advice will be backed with a reason. I am particularly interested in hearing from people who have been in an identical situation, or done something similar.
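
    As a quick compatibility probe, copying one daemon to the target box and listing its dynamic dependencies shows immediately whether Debian's libraries satisfy it (a sketch; mydaemon is a placeholder name):

        ldd ./mydaemon                               # any "not found" lines mean missing libraries
        objdump -T ./mydaemon | grep -o 'GLIBC_[0-9.]*' | sort -u   # glibc symbol versions the binary demands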

    Read the article

  • Use Delayed::Job to manage multiple job queues

    - by Alex
    I want to use Delayed::Job (or perhaps a job queue more appropriate to my problem) to dispatch jobs to multiple background daemons. I have several background daemons that carry out different responsibilities, and each one is interested in different jobs in the queue from the Rails app. Is this possible using Delayed::Job, or is there a different job queue that better fits this task?
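
    For what it's worth, later delayed_job releases grew named queues, which map directly onto this layout; a sketch (assumes delayed_job >= 3.0, and ReportJob is a placeholder; older versions offer only numeric priorities):

        # in the Rails app: enqueue into a named queue
        Delayed::Job.enqueue ReportJob.new, :queue => 'reports'
        # each daemon works only its own queue(s)
        QUEUES=reports rake jobs:work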

    Read the article

  • Why would one build supervisord inside of a buildout?

    - by chiggsy
    I've seen buildout recipes that build supervisor into the buildout, I suppose to control the daemons inside it. However, it seems to me that one would still need something in /etc/init.d (for example) to run said supervisor instance on boot. So why build supervisor inside the buildout? Why not install it system-wide and just make a config file for the daemons involved inside it?

    Read the article

  • Execute script with Ruby on Rails?

    - by yuval
    I want to start my daemon with my application. On the command line, I can write something like lib/daemons/mydaemon_ctl start to start up my daemon, but I have to do this manually. I want the daemon to start when I start my server (i.e. when the initializer files are loaded). Is there a Ruby command for executing a command line, something like exec "lib/daemons/mydaemon_ctl start"? Thanks!
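
    A sketch of the initializer route (the filename and path are assumptions; Kernel#system returns once the ctl script exits, and a daemons-style ctl script backgrounds itself, so booting the server isn't blocked):

        # config/initializers/start_daemon.rb -- hypothetical filename
        unless system(File.join(Rails.root.to_s, 'lib/daemons/mydaemon_ctl'), 'start')
          Rails.logger.warn 'mydaemon_ctl start failed'
        end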

    Read the article

  • How complex of a daemon should be run through inetd?

    - by amphetamachine
    What is the general rule for which daemons should be started through inetd? Currently, on my server, sshd, apache and sendmail are set up to run all the time, whereas simple *NIX services are set up to be started by inetd. I'm the only one who uses ssh on my computer; break-in attempts aren't a problem because it runs on a non-standard port, and my HTTP server gets maybe 5 hits a day that aren't GoogleBot. My question is: what are the benefits vs. the performance hits of running a complex daemon like sshd or apache through the superserver, and what successes or failures, if any, have you had running your own daemons in this manner?
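
    For context, hooking a daemon through inetd is a single line in /etc/inetd.conf; the superserver owns the listening socket and forks a fresh server process per connection, which is exactly the per-connection cost being weighed here. A sketch entry:

        # service  type    proto  wait    user  program            arguments
        ftp        stream  tcp    nowait  root  /usr/sbin/in.ftpd  in.ftpd -l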

    Read the article

  • Imap server woes with Android Gingerbread email and Thunderbird

    - by Mojo
    I run my own mail server and use UW's imapd/popd daemons to provide service. This week I upgraded my OG Droid to a new Droid 3, running Android 2.3.4 (Gingerbread). The email client is much improved over the previous one. But now there's a bad interaction when I access email over IMAP from Thunderbird on a laptop or desktop: frequently Thunderbird will stop receiving any email at all, and new mail will appear only on the Droid. Sometimes a Thunderbird restart will make the mail appear, but none of my "deletes" will be recorded, so when I start Thunderbird again, all my old email reappears. If I kill all of the open imap daemons and restart xinetd, I can force it to behave for maybe a session. I've tried turning off IDLE support (push email) on both sides, to no apparent avail. I've also tried installing DroidMail, with the same result.

    Read the article

  • systemctl (Fedora 17) and interacting with spawned processes' consoles

    - by Sean
    Introduction: I've recently upgraded to Fedora 17 and I'm getting used to the newer systemctl daemon manager versus shell init scripts. A feature I need on some of my daemons is the ability to interact with their consoles, because unclean shutdowns not initiated by the process itself can cause database corruption. So performing a systemctl stop service-name.service, for example, might cause irreversible data loss. These consoles read user input through stdin or similar methods, so what I did on my old OS was to run those daemons in the foreground inside a screen session and suspend that session with ^A ^z. It's also worth noting that I've now made systemctl do this automatically when the computer reboots, but that still doesn't solve the potential data corruption I'm trying to avoid. My question: is there a way to use systemctl to interact directly with the console of the processes it spawns? Can I hook into a process through systemctl to get access to its console? Thanks! You guys always give great answers, so I'm turning to you.
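
    One angle that sidesteps consoles entirely: if a daemon exposes any out-of-band shutdown command, wiring it into ExecStop= makes systemctl stop perform the clean shutdown itself. A sketch unit file (mydaemon and its ctl command are placeholders):

        [Unit]
        Description=Daemon with a clean, application-driven shutdown (sketch)

        [Service]
        ExecStart=/usr/local/bin/mydaemon
        # ask the daemon to flush and exit instead of relying on a bare SIGTERM
        ExecStop=/usr/local/bin/mydaemonctl save-and-quit
        TimeoutStopSec=120

        [Install]
        WantedBy=multi-user.target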

    Read the article

  • Clustered memcached alternative

    - by Johan Kooijman
    I'm looking to replace memcached. We have a LOT of traffic to our central memcached node, which I'd like to split; there's only so much network trunking I can do. My general idea is to install a memcached-type daemon on every webserver and have the daemons replicate sets/deletes/updates among themselves, so that each webserver connects to a socket on localhost and all data is available on all nodes. The alternatives:

      • repcached (max 2 masters)
      • redis (single master)
      • couchdb/mongodb/handlersocket: persistent data on disk; I'd like to remove the disk part to gain more performance.

    Any hints?

    Read the article

  • Macs Don’t Make You Creative! So Why Do Artists Really Love Apple?

    - by Eric Z Goodnight
    Chances are you have at least one "creative" friend who's a Mac advocate. Ever wondered how Apple got a reputation as the "creative company," or why artists are so drawn to them? Surely, computers can't make you creative, can they? Maybe you're an avid Mac Hater, or maybe you're an Apple advocate—chances are you've heard of this myth and wonder why people all seem to think this way. Take a look through the history of Apple, and see why Macintosh has become so synonymous with desktop publishing, photography, creativity, and design industries.

    Read the article

  • Automatic Generalization

    - by Nick Harrison
    I have been interested in functional programming since college. I played around a little with LISP back then, but I have not had an opportunity since. Now that F# ships standard with VS 2010, I figured now is my chance. So, I was reading up on it a little over the weekend when I came across a very interesting topic. F# includes a concept called "Automatic Generalization". As I understand it, the compiler will look at your method and analyze how you are using parameters. It will automatically switch to a generic parameter if it is possible based on your usage. Wow! I am looking forward to playing with this. I have long been an advocate of using the most generic types possible, especially when developing library classes: use the highest-level base class you can get away with; use an interface instead of a specific implementation. I don't advocate passing object around, but you get the idea. Tools like ReSharper, FxCop, and most static code analysis tools provide guidance to help you identify when a more generalized type is possible, but this is the first time I have heard of the compiler taking matters into its own hands. I like the sound of this. We'll see if it is a good idea or not. What are your thoughts? Am I missing the mark on what Automatic Generalization does in F#? How would this work in C#? Do you see any problems with this?
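
    A minimal F# sketch of the feature: no type annotations appear below, yet the compiler generalizes the parameters to a comparison-constrained type variable instead of fixing them to the first concrete use:

        // inferred as  max' : 'a -> 'a -> 'a  when 'a : comparison
        let max' a b = if a > b then a else b

        let biggestInt    = max' 3 7        // used at int
        let biggestString = max' "ab" "cd"  // reused at string, no overload needed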

    Read the article

  • Change Comes from Within

    - by John K. Hines
    I am in the midst of witnessing a variety of teams moving away from Scrum. Some of them are doing things like replacing Scrum terms with more commonly understood terminology. Mainly they have gone back to using industry standard terms and more traditional processes like the RAPID decision-making process. For example:

      • Scrum Master becomes Project Lead.
      • Scrum Team becomes Project Team.
      • Product Owner becomes Stakeholders.

    I'm actually quite sad to see this happening, but I understand that Scrum is a radical change for most organizations. Teams are slowly but surely moving away from Scrum to a process that non-software engineers can understand and follow. Some could never secure the education or personnel (like a Product Owner) to get the whole team engaged. And many people with decision-making authority do not see the value in Scrum besides task planning and tracking. You see, Scrum cannot be mandated. No one can force a team to be Agile, collaborate, continuously improve, and self-reflect. Agile adoptions must start from a position of mutual trust and willingness to change. And most software teams aren't like that. Here is my personal epiphany from over a year of attempting to promote Agile on a small development team: the desire to embrace Agile methodologies must come from each and every member of the team. If this desire does not exist - if the team is satisfied with its current process, if the team is not motivated to improve, or if the team is afraid of change - the actual demonstration of all the benefits prescribed by Agile and Scrum will take years. I've read some blog posts lately that criticise Scrum for demanding "Big Change Up Front." One's opinion of software methodologies boils down to one's perspective. If you see modern software development as successful, you will advocate for small, incremental changes to how it is done. If you see it as broken, you'll be much more motivated to take risks and try something different. So my question to you is this: is modern software development healthy or in need of dramatic improvement? I can tell you from personal experience that any project that requires exploration, planning, development, stabilisation, and deployment is hard. Trying to make that process better with an only slightly modified approach is a mistake: you will become completely dependent upon the skillset of your team (the only variable you can change). But the difficulty of planned work isn't one of skill. It isn't until you solve the fundamental challenges of communication, collaboration, quality, and efficiency that skill even comes into play. So I advocate for Big Change Up Front. And I advocate for it to happen often, until those involved can say, from experience, that it is no longer needed. I hope every engineer has the opportunity to see the benefits of Agile and Scrum on a highly functional team. I'll close with more key learnings that can help with a Scrum adoption:

      • Your leaders must understand Scrum. They must understand software development, its inherent difficulties, and how Scrum helps. If you attempt to adopt Scrum before that understanding is there, your leaders will apply traditional solutions to your problems, often creating more problems.
      • Success should be measured by quality, not revenue. Namely, the value of software to an organization is the revenue it generates minus ongoing support costs. You should identify quality-based metrics that show the effect Agile techniques have on your software.
      • Motivation is everything.
    I finally understand why so many Agile advocates say that if you are not on a team using Agile, you should leave and find one. Scrum, and especially Agile, encompass many elegant solutions to a wide variety of problems. If you are working on a team that has not encountered these problems, then the team may never see the value in the solutions. Having said all that, I'm not giving up on Agile or Scrum. I am convinced it is a better approach for software development. But reality is saying that its adoption is not straightforward and is highly subject to disruption. Unless, that is, everyone really, really wants it.

    Read the article

  • Spawn a background process in Ruby

    - by Dave DeLong
    I'm writing a Ruby bootstrapping script for a school project, and part of this bootstrapping process is to start a couple of background processes (which are written and function properly). What I'd like to do is something along the lines of:

        `/path/to/daemon1 &`
        `/path/to/daemon2 &`
        `/path/to/daemon3 &`

    However, that blocks on the first call to execute daemon1. I've seen references to a Process.spawn method, but that seems to be a 1.9+ feature, and I'm limited to Ruby 1.8. I've also tried to execute these daemons from different threads, but I'd like my bootstrap script to be able to exit. So how can I start these background processes so that my bootstrap script doesn't block and can exit (but still have the daemons running in the background)? Thanks!
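
    On 1.8 the portable idiom is fork plus exec, with a detach so the parent can exit without leaving zombies; a sketch:

        %w[/path/to/daemon1 /path/to/daemon2 /path/to/daemon3].each do |cmd|
          pid = fork do
            exec(cmd)          # replace the child process with the daemon
          end
          Process.detach(pid)  # reap the child in the background; the script may exit
        end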

    Read the article

  • is appassembler plugin broken for java service wrapper on windows 64bit?

    - by Paul McKenzie
    Hi, I'm developing on 32-bit Windows and am using appassembler to create a Java Service Wrapper assembly, which works OK. But I also need to create a 64-bit assembly for deployment to a dev server. In the following config I have substituted the 32-bit platform with the 64-bit one; see the <includes> section. But it no longer places the wrapper jar and dll in the lib folder. If I omit the <includes> completely, I get Linux, Solaris, Mac OS X and Win32 libraries, but no Win64. Anyone got this working?

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>appassembler-maven-plugin</artifactId>
          <version>1.1-SNAPSHOT</version>
          <configuration>
            <target>${project.build.directory}/appassembler</target>
            <repositoryLayout>flat</repositoryLayout>
            <defaultJvmSettings>
              <initialMemorySize>256M</initialMemorySize>
              <maxMemorySize>1024M</maxMemorySize>
            </defaultJvmSettings>
            <daemons>
              <daemon>
                <id>MyApp</id>
                <mainClass>com.foo.AppMain</mainClass>
                <platforms>
                  <platform>jsw</platform>
                </platforms>
                <generatorConfigurations>
                  <generatorConfiguration>
                    <generator>jsw</generator>
                    <includes>
                      <include>windows-x86-64</include>
                    </includes>
                    <configuration>
                      <property>
                        <name>set.default.REPO_DIR</name>
                        <value>../../repo</value>
                      </property>
                    </configuration>
                  </generatorConfiguration>
                </generatorConfigurations>
              </daemon>
            </daemons>
          </configuration>
          <executions>
            <execution>
              <goals>
                <goal>generate-daemons</goal>
                <goal>create-repository</goal>
              </goals>
            </execution>
          </executions>
        </plugin>

    Read the article

  • Java Application/Thread Server

    - by Manrico Corazzi
    I am looking for something very close to an application server, with these features:

      • it should handle a series of threads/daemons, allowing the user to start, stop, or reload each one without affecting the others
      • it should keep libraries separated between different threads/daemons
      • it should allow some libraries to be shared

    Currently we have some legacy code reinventing the wheel, and not a perfectly round-shaped one at that! I thought of using Tomcat, but I don't need a web server, except maybe for a simple backoffice user interface (/manager/html). Any suggestions? Is there a non-web application server, or is there a better alternative to Tomcat (more lightweight, for example, or easier to configure)? Thanks in advance.

    Read the article

  • What was scientifically shown to support productivity when organizing/accessing files and folders?

    - by Tom Wijsman
    I have gathered terabytes of data, but it has become a habit to store files and folders in the same folder, a folder that could be seen as an inbox where most (non-installation) files enter my system. This way I end up with a big collection of files that are hard to organize properly. I mostly end up making folders that match the file type, but then I still have several gigabytes of data per folder, which isn't efficient enough to let me use the folder productively. I'd rather do a few clicks than have to search through the files, whether by some software product or by looking through the folder. Often the file names themselves are not proper, so it would be easier to recognize files if there were few in a folder rather than thousands. "Scaling in the structure of directory trees in a computer cluster" summarizes this problem as follows:

        The processes of storing and retrieving information are rapidly gaining importance in science as well as society as a whole [1, 2, 3, 4]. A considerable effort is being undertaken, firstly to characterize and describe how publicly available information, for example in the world wide web, is actually organized, and secondly, to design efficient methods to access this information.

        [1] R. M. Shiffrin and K. Börner, Proc. Natl. Acad. Sci. USA 101, 5183 (2004).
        [2] S. Lawrence and C. L. Giles, Nature 400, 107–109 (1999).
        [3] R. F. I. Cancho and R. V. Solé, Proc. R. Soc. London, Ser. B 268, 2261 (2001).
        [4] M. Sigman and G. A. Cecchi, Proc. Natl. Acad. Sci. USA 99, 1742 (2002).

    It goes on to explain how the data is usually organized by taking general looks at it, but judging from the abstract and conclusion it doesn't arrive at an approach that results in a productive organization of a directory hierarchy. So, in essence, this is a problem for which I haven't found a solution yet, and I would love to see a scientific solution to it. Upon searching further, I don't seem to find anything useful or any free papers that approach this problem, so it might be that I'm looking in the wrong place. I've also noted that there are different ways to phrase this problem, which lead to different sets of search results; perhaps a paper is out there, but I'm just not using the same (often more scientific) terms it uses. I once heard a story about an advocate with a laptop who simply outperformed an advocate who had tons of papers, which shows how proper organization leads to productivity; but that story didn't share details on how the advocate used the laptop or how he had organized his data. In any case, it was far more useful than how most of us organize our data these days... Don't just advise me how I should organize my data; I'm not looking for suggestions here. I would love to see statistics or scientific measurement approaches that help me confirm that a given organization does help me reach my goal.

    Read the article
