Search Results

Search found 6053 results on 243 pages for 'solutionsfactory usage'.


  • Building a Linux system

    - by webyankee
    I am worried about hardware compatibility. I have several older PCs with various hardware and wish to install Linux onto them. I have several ideas about what I would like to do. First, I am a novice and know just enough to get me into trouble in a lot of areas. I cannot find adequate descriptions of the difference in usage between a desktop and a server version of Linux. When would you want to build a server instead of a desktop, and can you change a desktop into a server if you need higher functionality? I also wonder if I should use 32 or 64 bit? I believe 32 bit would be the safe way to go on older (P1 or P2) systems. To what extent can these systems be used? Can they be used to play high-end graphics online games, or just for simple browsing and word processing? How do I determine what programs the system can run? I have also pondered the idea of linking several systems together to make one big computer. I know this can be done with some functionality improvement. Any ideas about this?

    Read the article

  • How do I source a shell script for Node Version Manager?

    - by Matthew Patrick Cashatt
    Hi and thanks for looking! I am new to Linux/Ubuntu, but I have set up an Ubuntu box on which to run Node.js. I have had moderate success, but now I need to be able to easily upgrade my version of Node. Many folks recommend using Node Version Manager. I followed the directions, but when I try to do something like this: nvm ls I get a message stating that no command nvm was found. I have gone back to check the steps I followed to install NVM, but there is one part that is tricky for me and that I think is the culprit: sourcing the file for bash. From the instructions: "To activate nvm, you need to source it from your bash shell: . ~/nvm/nvm.sh. I always add this line to my ~/.bashrc or ~/.profile file to have it automatically sourced upon login. Often I also put in a line to use a specific version of node." So which file should I add this to? I am guessing ~/.profile since it's Ubuntu? Also, where in the file do I add this line? After I have added this line, do I need to reboot or anything? Any help would be deeply appreciated, especially if you can show me an example profile file with . ~/nvm/nvm.sh integrated so that I can see usage. Thanks, Matt
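    For reference, a minimal sketch of what that sourcing step usually looks like on Ubuntu, assuming nvm was cloned to ~/nvm (adjust the path if yours differs):
        # Append the nvm sourcing line to ~/.profile, which Ubuntu reads at login.
        echo ". ~/nvm/nvm.sh" >> ~/.profile
        # Load it into the current shell right away; no reboot is needed.
        . ~/.profile
        # nvm should now be a known command.
        nvm ls
    Note that non-login terminals read ~/.bashrc instead, so some people add the same line to both files; this is only a sketch of the commonly suggested setup, not the poster's exact configuration.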

    Read the article

  • Automatic Generalization

    - by Nick Harrison
    I have been interested in functional programming since college. I played around a little with LISP back then, but I have not had an opportunity since. Now that F# ships standard with VS 2010, I figured now is my chance. So, I was reading up on it a little over the weekend when I came across a very interesting topic. F# includes a concept called "Automatic Generalization". As I understand it, the compiler will look at your method and analyze how you are using parameters. It will automatically switch to a generic parameter if that is possible based on your usage. Wow! I am looking forward to playing with this. I have long been an advocate of using the most generic types possible, especially when developing library classes. Use the highest-level base class that you can get away with. Use an interface instead of a specific implementation. I don't advocate passing object around, but you get the idea. Tools like ReSharper, FxCop, and most static code analysis tools provide guidance to help you identify when a more generalized type is possible, but this is the first time I have heard about the compiler taking matters into its own hands. I like the sound of this. We'll see if it is a good idea or not. What are your thoughts? Am I missing the mark on what Automatic Generalization does in F#? How would this work in C#? Do you see any problems with this?

    Read the article

  • Download the Anime Angels Theme for Windows 7

    - by Asian Angel
    Do you have a passion for all things anime? Then you will definitely want to have a look at the Anime Angels Theme for Windows 7. This cute theme will give your desktop that extra bit of fun and spunk to help bring a smile to your face. The theme comes with 21 Hi-Res wallpapers of the cutest Anime Angels from around the web, a wonderful set of anime icons, and great system sounds to round out the perfect anime theme. Anime Angels Theme For Windows (Anime Themes) [VikiTech]

    Read the article

  • How Security Products Are Made: An Interview with BitDefender

    - by Jason Fitzpatrick
    Most of us use anti-virus and malware scanners without giving the processes behind their construction and deployment much thought. Get an inside look at security product development with this BitDefender interview. Over at 7Tutorials they took a trip to the home offices of BitDefender for an interview with Catalin Cosoi (seen here), BitDefender's Chief Security Researcher. While it's notably BitDefender-centric, it's also an interesting look at the methodology employed by a company specializing in virus/malware protection. Here's an excerpt from the discussion about data gathering techniques: "Honeypots are systems we distributed across our network, that act as victims. Their role is to look like vulnerable targets, which have valuable data on them. We monitor these honeypots continuously and collect all kinds of malware and information about black hat activities. Another thing we do is broadcast fake e-mail addresses that are automatically collected by spammers from the Internet. Then, they use these addresses to distribute spam, malware or phishing e-mails. We collect all the messages we receive on these addresses, analyze them and extract the required data to update our products and keep our users secure and spam free." Hit up the link below for the full interview.

    Read the article

  • How to refactor when all your development is on branches?

    - by Mark
    At my company, all of our development (bug fixes and new features) is done on separate branches. When it's complete, we send it off to QA, who test it on that branch, and when they give us the green light, we merge it into our main branch. This could take anywhere between a day and a year. If we try to squeeze any refactoring in on a branch, we don't know how long it will be "out" for, so it can cause many conflicts when it's merged back in. For example, let's say I want to rename a function because the feature I'm working on is making heavy use of this function, and I found that its name doesn't really fit its purpose (again, this is just an example). So I go around and find every usage of this function, and rename them all to the new name, and everything works perfectly, so I send it off to QA. Meanwhile, new development is happening, and my renamed function doesn't exist on any of the branches that are being forked off main. When my issue gets merged back in, they're all going to break. Is there any way of dealing with this? It's not like management will ever approve a refactor-only issue, so it has to be squeezed in with other work. It can't be developed directly on main because all changes have to go through QA and no one wants to be the jerk that broke main so that he could do a little bit of non-essential refactoring.

    Read the article

  • How to document requirements for an API systematically?

    - by Heinrich
    I am currently working on a project where I have to analyze the requirements that two given IT systems, which use cloud computing, have for a Cloud API. In other words, I have to analyze what requirements these systems have for a Cloud API, such that they would be able to switch to it while still being able to accomplish their current goals. Let me give you an example of some informal requirements for Project A: When starting virtual machines in the cloud through the API, it must be possible to specify the memory size, CPU type, operating system and an SSH key for the root user. It must be possible to monitor the inbound and outbound network traffic per hour per virtual machine. The API must support the assignment of public IPs to a virtual machine and the retrieval of those public IPs. ... In a later stage of the project I will analyze some cloud computing standards that standardize cloud APIs to find out where possible shortcomings in the current standards are. A finding could, and probably will, be that a certain standard does not support monitoring resource usage and is thus not currently usable. I am currently trying to find a way to systematically write down and classify my requirements. I feel that the way I currently have them written down (like the three points above) is too informal. I have read a couple of requirements engineering and software architecture books, but they all focus too much on details and implementation. I really only care about the functionality provided through the API/interface, and I don't think UML diagrams etc. are the right choice for me. I think the requirements I have collected so far can be described as user stories, but is that already enough for a sophisticated requirements analysis? Probably I should go "one level deeper"... Any advice/learning resources for me?

    Read the article

  • MvcReportViewer v.0.4.0 is available!

    - by Ilya Verbitskiy
    Originally posted on: http://geekswithblogs.net/ilich/archive/2014/06/04/mvcreportviewer-v.0.4.0-is-available.aspx Today I released a new version of MvcReportViewer. This release contains mostly bug fixes reported by library users. I am glad to see that the Open Source model works and people try to contribute to the project! Thank you everybody for your bug reports and help with the project. Version 0.4.0: Added support for ASP.NET MVC 5. Removed the jQuery dependency; I have not tested it on IE8 or earlier versions, so any help with testing is welcome! Fixed a problem with SSRS keep-alive cookies. Keep-alive cookies are issued every time a report is opened during a browser session. Many people don't restart their browsers and in my case, Chrome doesn't get rid of the cookie session data on close; I had to delete them manually for the reports to start working again. I added a KeepSessionAlive control setting to manage SSRS keep-alive behavior. It is set to false by default to fix the Bad Request 400: Request Too Long issue. You can find a usage example in Fluent.cshtml. Fixed the bug where ReportViewer control parameters were not parsed when the ShowParameterPrompts parameter had not been set. Changed the public static MvcReportViewerIframe MvcReportViewer method to use IEnumerable<KeyValuePair<string, object>> reportParameters instead of a simple object, because users reported that they mostly use multiple values for report parameters. Added support for SSRS hosted on Windows Azure. Users should set the MvcReportViewer.IsAzureSSRS property to true in Web.config to use Windows Azure authentication. I do not have Windows Azure SSRS and built the code using the http://msdn.microsoft.com/en-us/library/gg552871.aspx#Authentication article. It would be nice if somebody from the community tested the change or provided me with a test report on Windows Azure for testing purposes.

    Read the article

  • USB and CD data cannot be read or mounted in 12.10

    - by aravind4j
    I'm using Ubuntu 12.10. I wrote a data disc using Brasero Disc Burner, but the system cannot read the CD. I tried to write the same data to my HP v220w pen drive, but now there is an error with that too. It shows the following:
        Error mounting /dev/sdb at /media/aravind4j/Aravind4j: Command-line `mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177" "/dev/sdb" "/media/aravind4j/Aravind4j"' exited with non-zero exit status 13: $MFTMirr does not match $MFT (record 0).
        Failed to mount '/dev/sdb': Input/output error
        NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/ directory, (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more details. (udisks-error-quark, 0)
    I used an NTFS file system for the pen drive, as you can see from the above error. I would like to recover the files, so please help me recover them without formatting the drive.
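    For context, the quoted error points at a damaged $MFT mirror on the NTFS volume; a commonly suggested first step from Ubuntu, assuming the ntfs-3g tools are installed and you accept the risk of touching the filesystem, looks roughly like this:
        # Unmount the stick first if it is mounted anywhere.
        sudo umount /dev/sdb
        # ntfsfix repairs common NTFS inconsistencies such as a mismatched $MFTMirr
        # and clears the dirty flag; it is not a full chkdsk replacement.
        sudo ntfsfix /dev/sdb
        # Try mounting again afterwards.
        sudo mount /dev/sdb /mnt
    This is a sketch rather than a guaranteed recovery path; if the data is irreplaceable, imaging the drive first (e.g. with ddrescue) is the safer route.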

    Read the article

  • PHP W3 Validator API, is this good? [closed]

    - by Josh Purcell
    I was trying to find a way to see if my site's code was valid or not, but I was continuously going over to the W3 Validator, so I decided to make an "API" (although it really isn't one!). I just wanted to know if anybody can find a better solution than the one I have made. This is what I currently use, with the usage of ?uri=http://www.mydomain.com :
        <?php
        if(!$_GET['uri']) {
            echo "No URI!";
        } else {
            $CheckURI = "http://validator.w3.org/check?uri=".$_GET['uri'];
            $URL = file_get_contents($CheckURI);
            $Start = strpos($URL, "<title>") + 7;
            $End = strpos($URL, "</title>");
            $Title = substr($URL, $Start, $End-$Start);
            if(preg_match('[Invalid]',$Title)) {
                //Code is INVALID
                echo "<a href='$CheckURI' title='This is not good!' target='_BLANK'>INVALID Source</a>";
            } elseif(preg_match('[Valid]',$Title)) {
                //Code is VALID
                echo "<a href='$CheckURI' title='Check It Yourself!' target='_BLANK'>Valid Source</a>";
            } else {
                //It Went WRONG
                echo "";
            }
        }

    Read the article

  • Why does working processors harder use more electrical power?

    - by GazTheDestroyer
    Back in the mists of time when I started coding, at least as far as I'm aware, processors all used a fixed amount of power. There was no such thing as a processor being "idle". These days there are all sorts of technologies for reducing power usage when the processor is not very busy, mostly by dynamically reducing the clock rate. My question is: why does running at a lower clock rate use less power? My mental picture of a processor is of a reference voltage (say 5V) representing a binary 1, and 0V representing 0. Therefore I tend to think of a constant 5V being applied across the entire chip, with the various logic gates disconnecting this voltage when "off", meaning a constant amount of power is being used. The rate at which these gates are turned on and off seems to have no relation to the power used. I have no doubt this is a hopelessly naive picture, but I am no electrical engineer. Can someone explain what's really going on with frequency scaling, and how it saves power? Are there any other ways that a processor uses more or less power depending on state? E.g. does it use more power if more gates are open? How are mobile / low-power processors different from their desktop cousins? Are they just simpler (fewer transistors?), or is there some other fundamental design difference?
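    For reference, the first-order model usually quoted for the dynamic (switching) power of CMOS logic is:
        P_dyn ≈ α · C · V² · f
    where α is the activity factor (the fraction of gates actually toggling), C the switched capacitance, V the supply voltage and f the clock frequency. Because frequency scaling (DVFS) typically lowers the supply voltage together with the clock, power falls faster than linearly with clock rate; there is also static leakage power that does not depend on f at all.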

    Read the article

  • What if you've been asked to develop a site and the client later introduces Ts&Cs that you'll breach whilst doing your job?

    - by Matt Lacey
    Disclaimer: this is all made up. Honest. And it represents no clients or employers living or dead, blah blah blah, etc. [Allegedly] As part of a website I've built, I've now been provided the Terms and Conditions of site usage to display on the site. These terms, which must be agreed to in order to access the site, include my (or any site visitor's) compliance with a number of clauses. Many of these clauses refer to general computer use and are not tied specifically to use of the site. Some of these clauses refer to things I have had to do previously as a legitimate part of my job and would expect to have to do again. When I've raised similar issues previously my line manager has said just to ignore it, but that doesn't seem to be the professional thing to do. So, what do I do? Abiding by the terms would mean that I could no longer work on the project and would cause issues with my employer and the owner of the business the site is being created for. Ignoring them could lead to possible future issues with the business owner and is not something I'm necessarily happy with (the deliberate breaking of a legal contract). Neither option is one I'd choose, and either could have major consequences. Any thoughts?

    Read the article

  • Visual WebGui launches a new prize-winning challenge for developers

    - by Webgui
    Gizmox is announcing a ListView Challenge where developers can participate by creating and submitting their own implementations of the new extended ListView. "It's quite amazing what you can do with it. It opens a lot of new ways to present data in a better and more user-friendly way," says one of the VWG community members, who built a three-level hierarchical ListView. Watch the hierarchical ListView demo by Visualizer. The ListView implementations will be reviewed and rated, and the winner will receive a free Professional Studio license worth $750. The 5 top-rated entries will entitle their developers to a cool new T-shirt. The new v6.4 introduces new capabilities with its extended ListView control. Enter the Challenge. The Collapsible Panel enhancement of the ListView control, along with the Column Type control, opens up the possibilities for potential usage of the ListView control for data display and data entry, and as the Collapsible Panel can contain whatever control you like, it can just as well contain other ListView controls, thus making it possible to create a hierarchical ListView display with an unlimited number of levels. The first enhancement is the introduction of a new column type, Control, which opens up the possibility for a ListView cell to contain controls like a CheckBox, ComboBox, ListBox or even a TabControl, Form or another ListView as the contents of that particular cell. This means that the ListView is no longer a display-only control, but has the full potential of being a full-blown data entry control as well. The second major enhancement is the introduction of the ListViewPanelItem. The ListViewPanelItem behaves exactly the same as its predecessor, the ListViewItem, and in addition it has a Panel control attached to it, a separate panel for each row in the ListView. This new Panel can be either expanded (visible) or not (hidden) and, when expanded, will fill the full width of the ListView, but has adjustable height. Watch a webcast about the extended ListView.

    Read the article

  • Digital HD Transition

    - by Bill Evjen
    The HD Experience: roughly 53% of the viewing public has HD-capable devices in their home, while 24% think they are watching HD although they have no subscription to any HD content.
    Today's HD considerations, where choices abound:
        - format resolution: 720p, 1080i/p
        - frame rates
        - compression and wrapping
        - audio compression and delivery
        - metadata packaging, delivery, and usage
        - content delivery protocols
    Metadata is going to be a part of the overall experience, along with emerging technologies: Super Hi-Vision (SHV, UHDTV 4320p), 3D, HEVC/H.265, WebM/VP8, HDBaseT, P2PTV, Dolby Pulse/HE-AAC, industry standardization, and metadata registration, packaging, and delivery standards.
    Improved picture and sound quality is a logical next step, but we need to also think about the end-to-end viewing experience, including:
        - 3D video and audio content
        - mixed-mode viewing to bring interactive and immersive experiences
        - content transportability both on-to and off-of the aircraft
    High Definition Standardization: analog switch-off around the world; the DTV transition is completed in 17 countries and in progress in 45 countries; the EU has mandated the end of 2012 as the final date for analog switch-off; D-Cinema was standardized by SMPTE in 2006. Airlines are installing HD displays today and passengers are bringing their own devices now. HD TVs on airlines are getting bigger and bigger – bigger than SD was – now up to 23". Also to consider: gray-scale data input for color (6 to 8 bit), contrast (400 to 700), LED backlighting, encryption (can it be the same for HD?), PPV in the cabin?

    Read the article

  • Video lags/freezes in SMPlayer and VLC

    - by RanRag
    When I try to play my video files in SMPlayer it works fine, but as soon as I switch to fullscreen mode (16:9) the following things happen: 1) the video starts lagging, 2) audio and video go out of sync, 3) CPU usage rises to ~50%, 4) SMPlayer starts to hang. My current SMPlayer configuration: 1) video output driver = x11 (slow), 2) audio output driver = alsa (0.0-HDA Intel), 3) cache = 8192 KB, 4) threads for decoding (MPEG-1/2 and H.264 only) = 2. Things I tried to solve this problem: 1) changing the video output driver to xv or gl, 2) changing the audio output driver to pulse, 3) increasing the cache size and also trying nocache. Everything works fine on Windows, but I don't want to switch to Windows just to play video files. My system configuration: Acer Aspire One D270, Atom N2600 (Cedar Trail) 1.6 GHz, 2 GB memory, Intel GMA 3600 graphics, Ubuntu 12.04, kernel release 3.2.0-23-generic-pae. Everything else is working fine; I have no resolution issue, and bluetooth and wireless are also working fine. Just ask for any other log file and I will be happy to post it. SMPlayer log, MPlayer terminal output, codec information (currently playing file).
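    As a side note, one way to narrow down whether the player front-end or the output driver is at fault is to test the same file directly with MPlayer from a terminal and switch drivers on the command line (a sketch; video.mkv stands in for the real file):
        # Hardware-accelerated Xv output, ALSA audio, frame dropping when the CPU
        # cannot keep up, and both Atom cores used for decoding.
        mplayer -vo xv -ao alsa -framedrop -lavdopts threads=2 video.mkv
        # Same test with the OpenGL output driver for comparison.
        mplayer -vo gl -ao alsa -framedrop -lavdopts threads=2 video.mkv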

    Read the article

  • How can I deal with file associations in different applications (not in Nautilus)?

    - by Julius
    Maybe I don't understand the system. I upgraded to (reinstalled) Ubuntu 11.04. Is there any way the applications can use the associations that I set in Nautilus, or is that just a wrong idea about how it is meant to be used? In Nautilus the file association works great: easy, handy and so on. My first problem was when I installed Chromium. I downloaded a file, a popup asked for an association, and I set Nautilus. Now it only opens folders; for any file it shows an error that it is not a directory. OK, so I thought Google Chrome had changed, because previously .pdf opened Acrobat, .torrent opened Vuze and so on. But now I have to open Nautilus on the download folder, then select and open the preferred application manually, and I can't use any of the automatism I was used to. Then in GNOME Commander, it did not follow the default association I set in Nautilus; OK, maybe that is Commander's fault and it uses its own. Then in Calibre, the "read" action again gives this default "can't open, it's not a directory" error. So it seems to me that the applications are not using these file associations properly, or I really don't understand the aim of the file association system (MIME types, .desktop files, ...). If there is no solution, I think I have to search for some program (if any exists) which can identify and launch applications and be set as the default instead of Nautilus.
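    For background, outside Nautilus most desktop applications resolve defaults through the shared freedesktop.org MIME database, which can be inspected and changed from a terminal; a minimal sketch (the file and .desktop names here are examples and may differ on a given system):
        # Ask which MIME type a file has and which .desktop entry currently handles it.
        xdg-mime query filetype document.pdf
        xdg-mime query default application/pdf
        # Set a different default handler for that MIME type.
        xdg-mime default evince.desktop application/pdf
        xdg-mime default vuze.desktop application/x-bittorrent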

    Read the article

  • Can higher-order functions in FP be interpreted as some kind of dependency injection?

    - by Giorgio
    According to this article, in object-oriented programming / design, dependency injection involves a dependent consumer, a declaration of a component's dependencies defined as interface contracts, and an injector that creates instances of classes that implement a given dependency interface on request. Let us now consider a higher-order function in a functional programming language, e.g. the Haskell function filter :: (a -> Bool) -> [a] -> [a] from Data.List. This function transforms a list into another list and, in order to perform its job, it uses (consumes) an external predicate function that must be provided by its caller; e.g. the expression filter (\x -> (mod x 2) == 0) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] selects all even numbers from the input list. But isn't this construction very similar to the pattern illustrated above, where the filter function is the dependent consumer, the signature (a -> Bool) of the function argument is the interface contract, and the expression that uses the higher-order function is the injector that, in this particular case, injects the implementation (\x -> (mod x 2) == 0) of the contract? More generally, can one relate higher-order functions and their usage pattern in functional programming to the dependency injection pattern in object-oriented languages? Or, in the other direction, can dependency injection be compared to using some kind of higher-order function?

    Read the article

  • Help me choose an Open-Source license

    - by Spartan-117A
    So I've done lots of open-source work. I have released many projects, most of which have fallen under GPL, LGPL, or BSD licensing. Now I have a new project (an implementation library), and I can't find a license that meets my needs (although I believe one may exist, hence this question). This is the list of things I'm looking for in the license: (1) appropriate credit given for ALL usage or derivative works; (2) no warranty expressed or implied; (3) the library may be freely used in ANY other open-source/free-software product (regardless of license: GPL, BSD, EPL, etc.); (4) the library may be used in closed-source/commercial products ONLY WITH WRITTEN PERMISSION. GPL - useless to me, obviously, as it completely precludes any and all closed-source use, violating requirement (4). BSD/LGPL/MIT - won't work, because they wouldn't require closed-source developers to get my permission, violating requirement (4); if it wasn't for that, BSD (FreeBSD in particular) would look like a good choice here. EPL/MPL - won't work either, as the code couldn't be combined with GPL code, therefore violating requirement (3); also I'm pretty sure they allow commercial works without asking permission, so they don't meet (4) either. Dual-licensing is an option, but in that case, what combination would hold to all four requirements? Basically, I want BSD minus the commercial use, plus an option to use in commercial/closed-source products as long as the developer has my written permission. EDIT: At the moment, I am thinking of something like multiple-licensing under GPL/LGPL plus something else for commercial use?

    Read the article

  • How do I debug an overheating problem?

    - by Tab
    Hello guys. I have a problem with my laptop (Dell Inspiron 1564, Core i5, 4 GB RAM, ATI Mobility Radeon HD 4300, running Ubuntu 10.10 32-bit). It shuts down abruptly, without even a lag in the application I am working with before the shutdown. I think it's an overheating problem. Actually the laptop is hot all the time when I am running Ubuntu. When I switch back to Windows, even under intense load it won't shut down or show any problem as long as I keep proper ventilation (when the air openings are blocked it does the same). On Ubuntu I don't usually do things that need much CPU power: usually surfing the internet, coding web pages and sometimes playing with Python and Ruby. I am not enabling desktop effects, so there is no GPU load except the normal GNOME GUI. Right now, as I am writing, the processor load in the panel monitor applet is 0%, memory is 11% used by programs and 22% by cache, and I have the CPU frequency monitor for each of the 4 cores set to 1.20 GHz, the lowest possible value (I am not sure if this applet really limits CPU usage). Running sensors in a terminal gave me: temp1: +26.8°C (crit = +100.0°C), temp2: +0.0°C (crit = +100.0°C). Running hddtemp /dev/sda at the terminal gave me: /dev/sda: WDC WD3200BEVT-75ZCT2: 46°C. All that looks fine, but the laptop is really hot: I can feel it in the keyboard, the touchpad is painful to touch, and the fan is always spinning. I am also placing 2 small fans running on USB under the laptop right now, and the laptop is lifted over the fans so it's well ventilated. When I am running Windows it doesn't get that hot except when there is a really big load on the CPU, and this is keeping me away from using Linux for everyday tasks. Actually I don't care much about speed; I can deal with low speed as long as it doesn't shut down abruptly. So please, if you can help me, tell me what the possible causes are and where I should start.
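    For reference, a small monitoring sketch that helps narrow problems like this down, assuming the lm-sensors, hddtemp and cpufrequtils packages are installed:
        # Watch temperatures and fan readings continuously while loading the machine.
        watch -n 2 sensors
        # Show which frequency governor is active and what range each core may use.
        cpufreq-info
        # Drive temperature as a second data point.
        sudo hddtemp /dev/sda
    Comparing these readings just before a shutdown against idle values usually shows whether something the applets do not display (for example the GPU or chipset) is the part that is overheating.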

    Read the article

  • canvas tile grid, hover effects, single tilesheet, etc

    - by user121730
    I'm currently in the process of building both the client and server side of an HTML5, canvas, and WebSocket game. This is what I have thus far for the client: http://jsfiddle.net/dDmTf/7/ Current obstacles: The hover effect has no idea what to put back after the mouse leaves. Currently it's just drawing a "void" tile, but I can't figure out how to redraw a single tile without redrawing the whole map. How would I go about storing multiple layers within the map variable? I was considering just using a multi-dimensional array for each layer (similar to what you see in the current array) and just iterating through it, but is that really an efficient way of doing it? Side note: the tile sheet being used for the jsfiddle display is only for development; I'll be replacing it as things progress in the engine. I hope you can help! I've been struggling to get through things, since I'm learning how this kind of stuff works as I go. If you guys have any pointers for my JavaScript, feel free to share them. As I'm more or less learning advanced usage as I go, I'm sure I'm doing plenty of things wrong. Note: I will continue to update this post as the engine improves, updating the jsfiddle link and the obstacles list by striking things that have been solved or adding additions. Thanks!

    Read the article

  • Providing SSH tunneling: what to think about when configuring Ubuntu Server

    - by bigbadonk420
    Recently I've considered, mostly as a pet project, setting up accounts for a closed group of users via SSH to my box, with the purpose of SSH tunneling things like web traffic: some of it for friends who live abroad and perhaps also to help some people bypass national censorship. There are some things I imagine I need to do, such as:
    - disabling shell access by setting the shell to /bin/false or similar;
    - getting some software that can track bandwidth usage on a per-user basis, historically;
    - making sure that each user can only use a certain amount of bandwidth.
    The reason I'm posting here to begin with is to look around and get some pointers regarding what kind of things I should read up on, as well as to hear if there are any software recommendations for doing what I'm trying to do. I already know a bit, since I've actually got SSH tunneling up and running already; I just don't feel like letting it loose to other people without restrictions and some basic monitoring. I'm primarily trying to learn here, so if you think this is a Very Bad Idea (or if you have a better idea on how to do this) then by all means say so, but please include some information on how to do it :) (I'm also open to trying things like OpenVPN, but it seems really hard to set up; also I've heard SSH more often works in locked-down environments.)
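    For what it's worth, a minimal sketch of the kind of tunnel-only account described above (the user name and port are placeholders; bandwidth accounting and per-user limits still need separate tooling):
        # Create an account that can authenticate but has no usable login shell.
        sudo adduser --shell /usr/sbin/nologin tunneluser
        # In /etc/ssh/sshd_config, allow forwarding for that user and nothing else:
        #   Match User tunneluser
        #       AllowTcpForwarding yes
        #       X11Forwarding no
        #       PermitTTY no
        # Reload the SSH daemon to pick up the change.
        sudo service ssh reload
        # The user then connects with -N (no remote command) and uses the box as a SOCKS proxy.
        ssh -N -D 1080 tunneluser@example.org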

    Read the article

  • Congratulations to 2012 Innovation Award winners in BPM category

    - by Manoj Das
    Last year many of our customers went live on BPM 11g. It is my extreme pleasure to congratulate two of them – Amadeus and Navistar – for being awarded Oracle Fusion Middleware Innovation Award at Oracle OpenWorld 2012. We invited our customers to submit their most innovative BPM implementations that have delivered substantiated value to them. This year we saw more than 20 submissions from our customers seeing significant business value from their live BPM 11g deployments. The submissions came from across the world, spanning various industry verticals including manufacturing, healthcare, logistics, Hi-Tech, Public Sector, Education and covering many process usage patterns. Award submissions were evaluated based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture.
    [Photo: Amadeus team receiving the Innovation Award from Hasan Rizvi]
    Congratulations to Amadeus and Navistar and their teams on being recognized from among some very strong submissions and more importantly for the business value delivered. It is an honor to be part of your success and to play a small role in the innovation you drive. Navistar is a leading truck manufacturing company which produces International® brand commercial and military trucks, MaxxForce® brand diesel engines, IC Bus™ brand school and commercial buses, and Navistar RV brands of recreational vehicles. The company also provides truck and diesel engine service parts. Amadeus is a leading transaction processor for the global travel and tourism industry, providing transaction processing power and technology solutions to both travellers and travel providers. Both Navistar and Amadeus have leveraged Oracle BPM Suite to improve visibility into their business and made their business more agile and efficient. We congratulate them again and wish them continued success in their business and future BPM initiatives.

    Read the article

  • PDFtk Password Protection Help

    - by Dave W.
    I am using Ubuntu 11.10 and am looking for a solution to password protect a bunch of pdf files in a directory in batch. I came across PDFtk and it looks like it might do what I need, but I've reviewed the command line PDFtk examples and can't figure out if there is a way to do it in batch without having to individually specify the output file name for every file. I'm hoping a command-line guru can take a look at the PDFtk syntax and tell me if there is some trick / command that will allow me to password protect a directory of pdf files (e.g., *.pdf) and overwrite the existing files using the same name, or consistently rename the individual output files without having to specify each output name individually. Here's a link to the PDFtk command line examples page: http://www.pdflabs.com/tools/pdftk-the-pdf-toolkit/ Thanks for your help. I think I've answered my own question. Here's a bash script that appears to do the trick. I'd welcome help evaluating why the code I've commented out doesn't work...
        #!/bin/bash
        # Created by Dave, 2012-02-23
        # This script uses PDFtk to password protect every PDF file
        # in the directory specified. The script creates a directory named "protected_[DATE]"
        # to hold the password protected version of the files.
        #
        # I'm using the "user_pw" parameter,
        # which means no one will be able to open or view the file without
        # the password.
        #
        # PDFtk must be installed for this script to work.
        #
        # Usage: ./protect_with_pdftk.bsh [FILE(S)]
        # [FILE(S)] can use wildcard expansion (e.g., *.pdf)

        # This part isn't working.... ignore. The goal is to avoid errors if the
        # directory to be created already exists by only attempting to create
        # it if it doesn't exists
        #
        #TARGET_DIR="protected_$(date +%F)"
        #if [ -d "$TARGET_DIR" ]
        #then
        #echo
        # echo "$TARGET_DIR directory exists!"
        #else
        #echo
        # echo "$TARGET_DIR directory does not exist!"
        #fi
        #
        mkdir protected_$(date +%F)

        for i in *pdf ; do
            pdftk "$i" output "./protected_$(date +%F)/$i" user_pw [PASSWORD];
        done

        echo "Complete. Output is in the directory: ./protected_$(date +%F)"
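    On the commented-out block above: one likely issue is that it only prints whether the directory exists but never actually creates it in either branch. A common way to write that check (just a sketch) is to let mkdir -p do the work, since it silently succeeds when the directory already exists:
        TARGET_DIR="protected_$(date +%F)"
        # -p creates the directory only if needed and never errors on an existing one.
        mkdir -p "$TARGET_DIR"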

    Read the article

  • Problems syncing photos and strange effects of uploaded files from other devices

    - by Daniel
    I have a Galaxy Spica (GT-i5700), Android v2.1, rooted with Leshak dev 7 #123. But never mind the root info; the problem would be the same unrooted. The photos from this phone are stored in "sdcard/images"; nevertheless the phone also creates an "sdcard/DCIM" folder but only stores some thumbnails there. Problem nr 1: U1 only reads the DCIM folder for the automatic photo upload, so photos stored on this phone are not uploaded. If I move photos to the DCIM folder, U1 recognises the photos and starts uploading them. Possible solution: could there be an option in the settings to set a preferred photo folder? Problem nr 2: out of 74 pictures, 12 did not get uploaded. Pressing "Retry failed transfers" in Settings does nothing. Tapping the files whose status is "Upload failed, tap to retry" only changes the status to "Uploading..." but nothing gets uploaded. If I upload another file to U1, it is uploaded directly without any problem. It has nothing to do with file size; 1.1 MB files have been uploaded fine whilst some that failed are 0.8 MB. Problem nr 3: the photos from DCIM are in my case uploaded to a folder called "Pictures - GT-I5700" in U1. If I log in to the homepage and from there upload another photo into "Pictures - GT-I5700", it shows up in U1 on my phone fine. But when I tap it, U1 downloads the photo to "sdcard/U1/Pictures - GT-I5700". If it syncs photos from "sdcard/DCIM" to a specific folder, why not also download files to the same folder from which they were synced? After a while of usage, syncing and uploading files from different clients, it would become a mishmash of folders and places where files are stored, and considering that, I see no use for U1 at all. Another question: if my SD card in some way breaks down, some folders cannot be read, or the card is temporarily changed while U1 is running, does U1 consider that as files being deleted and also delete them from the cloud?

    Read the article

  • How-To Geek is Hiring a Geeky Writer – Here Are the Details

    - by The Geek
    Think you have the perfect combination of geek knowledge and writing skills? We’re looking for an experienced writer to join our team, and here are all the details. We need a new writer to cover topics surrounding Windows 7 or 8, home networking, home routers, security, media, troubleshooting, mobile devices, and many similar topics. We are not looking for writers that focus solely on Linux or tech news writers. Please apply if you have the following qualities: You must be a geek at heart. You must be able to put in plenty of time, work, and dedication. If you’re too busy already, don’t apply. You must be able to write articles that are easy to understand. You must be creative. You must generate ideas for articles on your own, and take suggestions like a pro. You must be at least 18 years old. You must have solid English writing skills. You must be able to write tips, how-to articles, explainers, guides, instructional articles, etc. Again, we are not looking for tech news writers. Here’s a couple of our previous articles so you can get an idea of what we’re looking for in terms of quality and content. Please make sure to look through these before you decide to apply. How-to Article: Make Your Own Windows 8 Start Button with Zero Memory Usage Explainer: HTG Explains: When Do You Need to Update Your Drivers? Explainer: HTG Explains: Why Do Hard Drives Show the Wrong Capacity in Windows? How-To Article: How to Factory Reset Your Android Phone or Tablet When It Won’t Boot

    Read the article
