Search Results

Search found 31410 results on 1257 pages for 'disk based'.


  • Tomcat + Spring + CI workflow

    - by ex3v
    We're starting our very first project with Spring and the Java web stack. This project is mainly about rewriting a fairly large ERP/CRM from Zend Framework to Java. An important factor in my question is that I come from PHP territory, where things (in terms of quality) tend to look different than in the Java world. Facts:
    - there will be 2-3 developers; at least one developer uses Windows, the rest use Linux,
    - there is one remote Linux-based machine, which should handle the test and production instances,
    - after struggling with buggy legacy code, we want to introduce good programming and development practices (CI, tests, clean code and so on),
    - client: internal, frequent business logic changes, scrum, daily deployments.
    What I want to achieve is a good workflow across as many development stages as possible (coding - committing - testing - deploying). The problem is that I've never done this before, so I don't know what the best practices are. What I have so far:
    - developers code locally; there is a Vagrant instance on every development machine, managed by Puppet, containing the same Linux, Jenkins and Tomcat versions as the production machine,
    - while coding, the developer deploys to the Vagrant machine,
    - after a local merge to the test branch, Jenkins on the Vagrant machine runs the tests,
    - when everything is fine, the developer pushes commits and merges,
    - Jenkins on the remote machine pulls the commit from the test branch, runs tests and so on; if everything looks green, Jenkins deploys to the test Tomcat instance.
    Deployment to production is manual (although it can be done using helper scripts) once the business logic has been tested by other divisions and everything looks fine to the client. Now, the real question: does the above make any sense? Things that I'm not sure about: Remote machine: won't there be problems with two (or even three, as Jenkins might need one) instances of the same app on Tomcat? Using Vagrant to develop in a PHP environment is just wise, but isn't it overkill with Tomcat? I mean, is there a higher probability that Tomcat will act the same on every machine? Does it make sense to have a local Jenkins on Vagrant?

    Read the article

  • can canonical links be used to make 'duplicate' pages unique?

    - by merk
    We have a website that allows users to list items for sale. Think eBay - except we don't actually handle the sale; we just list the item and provide a way to contact the seller. Anyhow, in several cases sellers may have multiple units of an item for sale. We don't have a quantity field, so they upload each item as a separate listing (and using a quantity field is not an option). So we have a lot of pages that basically have the exact same info and only the item # might be different. The SEO guy we've started using has said we should put a canonical link on each page, and have the canonical link point to itself. So, for example, www.mysite.com/something/ would have a canonical link of href="www.mysite.com/something/" (see the example tag below). This doesn't really seem kosher to me. I thought canonical links were supposed to point to other pages. The SEO guy claims doing it this way will tell Google all these pages are indeed unique, even if they do basically have the same content. This seems a little off to me, since what's to stop a spammer from putting up a million pages and doing this as well? Can anyone tell me if the SEO guy's suggestion is valid or not? If it's not valid, do I need to figure out some way to detect duplicated items and automatically pick one of the duplicates to serve as the original, and generate canonical links based on that? Thanks in advance for any help.
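
    For illustration, the self-referencing tag the SEO consultant describes would sit in the <head> of each listing page and point at that page's own absolute URL (the URL below is a placeholder, not a real listing):

        <!-- on http://www.mysite.com/listing/item-12345/ -->
        <link rel="canonical" href="http://www.mysite.com/listing/item-12345/" />

    The alternative raised at the end of the question would instead point every near-identical listing's canonical at one chosen "original" page, consolidating them rather than declaring each unique.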

    Read the article

  • How should I structure moving from overworld to menu system / combat?

    - by persepolis
    I'm making a text-based "Arena" game where the player owns 5 creatures that battle other teams for loot, experience and glory. The game is very simple, using Python and curses. I have a static ASCII map of an "overworld" of sorts. My character, represented by a glyph, can move about this static map. There are locations all over the map that the character can visit, which break down into two types: 1) Towns, which are a series of menus that allow the player to buy equipment for his team, hire new recruits or do other things. 2) Arenas, where the player's team gets a "battle" interface with actions he can perform, messages about the fight, etc. Maybe later an ASCII representation of the fight, but for now just screens of information with action prompts. My main problem is what kind of design or structure I should use to implement this. Right now, the game goes through a master loop which waits for keyboard input and then moves the player about the screen. My current thinking is this: 1) Upon keyboard input, the player coordinates are checked against a list of Location objects, and if the player coords match the Location coords then... 2) ??? I'm not sure if I should then call a separate function to initiate a "menu" or "combat" mode. Or should I create some kind of new GameMode object that itself contains a method for drawing the screen and printing the necessary info? How do I pass my player's team data into this object? My main concern is passing program flow around into all these objects. Should I be calling plain functions for different parts of my game, and objects to represent "things" within my game? I was reading about the MVC pattern and how this kind of problem might benefit from it - decouple the GUI from the game logic and user input - but I have no idea how this applies to my game.
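
    One common way to structure this, sketched below in plain Python, is to give each screen (overworld, town menu, arena) its own state object and let the main loop delegate to whichever state is active; the shared team/map data is passed in through the constructor rather than through globals. Every class, method and attribute name here (GameState, OverworldState, game.move_player, location.kind, ...) is invented for illustration, and the `game` object is assumed to exist.

        # Minimal state-machine sketch; all names are hypothetical.

        class GameState:
            """Each screen knows how to react to a key and draw itself."""
            def __init__(self, game):
                self.game = game              # shared data: player team, map, etc.

            def handle_key(self, key):        # return the next active state
                return self

            def draw(self, screen):
                pass


        class OverworldState(GameState):
            def handle_key(self, key):
                self.game.move_player(key)
                location = self.game.location_at(self.game.player_pos)
                if location is None:
                    return self
                # Entering a town or arena just swaps the active state;
                # the main loop never needs to know which screen is running.
                if location.kind == "town":
                    return TownMenuState(self.game, location)
                return ArenaState(self.game, location)


        class TownMenuState(GameState):
            def __init__(self, game, town):
                super().__init__(game)
                self.town = town

            def handle_key(self, key):
                if key == ord("q"):           # leave the menu, back to the map
                    return OverworldState(self.game)
                return self


        class ArenaState(GameState):
            def __init__(self, game, arena):
                super().__init__(game)
                self.arena = arena            # combat prompts, battle log, etc.


        def main_loop(screen, game):
            state = OverworldState(game)
            while not game.quit:
                state.draw(screen)
                key = screen.getch()          # curses blocking read
                state = state.handle_key(key)

    This keeps the master loop tiny and puts each mode's drawing and input logic in one place, which is essentially the "GameMode object" idea from the question.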

    Read the article

  • generating maps

    - by gardian06
    This is a conglomeration question; when answering, please specify which part you are addressing. I am looking at creating a maze-type game that utilizes elevation. I have a few features I would like to have, but am unsure about some of the implementation. I have done file-I/O-based maze generation (using a key to read the file, and then generating the level based on that file), but I am unsure how to think about this with elevation in the mix. I think height maps might be a good approach, but don't know how to represent them effectively. For a height map, which is more beneficial: XML (containing h[u,v] data and a key definition), CSV (item1 is the key reference, item2 is the elevation), or another approach that I have not thought of yet? (A rough CSV sketch follows below.) When it comes to placing the elevation values themselves, what kind of delta-h values are appropriate to make a slope noticeable at about a 60-degree angle while not really affecting gravity-driven physics (assuming some effect while moving up/down hill)? I am thinking of maybe going to procedural generation at some point, but am wondering if it is practical to have a procedurally generated grid (wall squares possibly the same dimensions as the open-space squares), or if designing with thin walls between open spaces is better? This decision will affect the amount of work needed on the graphics end for uniform vs. irregular walls. EDIT: the game will be an elevation maze shooter. Levels/maps will be mazes with elevation the player has to negotiate. Elevation will have effects on "combat", vision, and movement.
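
    As a rough illustration of the CSV option (the cell format "key:elevation" below is invented here, not a standard), each cell carries a tile key plus an integer elevation step, and the loader produces a 2-D tile grid and a parallel height map:

        import csv

        # Hypothetical cell format "K:E": K is the tile key (e.g. 'W' wall,
        # '.' open floor) and E is an integer elevation step.
        def load_map(path):
            tiles, heights = [], []
            with open(path, newline="") as f:
                for row in csv.reader(f):
                    tile_row, height_row = [], []
                    for cell in row:
                        key, _, elev = cell.partition(":")
                        tile_row.append(key)
                        height_row.append(int(elev) if elev else 0)
                    tiles.append(tile_row)
                    heights.append(height_row)
            return tiles, heights

        # Example file contents: a 3x3 strip rising one step per column.
        # W:0,W:1,W:2
        # .:0,.:1,.:2
        # W:0,W:1,W:2

    On the 60-degree question: the slope angle between adjacent tiles is atan(delta_h / tile_width), so a delta-h of roughly 1.7 times the tile width gives a slope of about 60 degrees, while a delta-h equal to the tile width is only about 45 degrees.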

    Read the article

  • Network connection fails when downloading data

    - by Guus
    In the past few weeks, I've noticed my network connection becoming unstable while downloading files, and I'm not sure how to diagnose the issue. I've found that my network connections appear to become temporarily unresponsive (for periods of up to a minute or so) while I'm downloading a file (one that's large enough not to be downloaded instantly). The problem occurs when downloading data through a web browser, but also when using SCP to download data from a remote location. During the period in which network connections are unresponsive, every resource that I try to access over the network is unavailable. This includes: the download itself (SCP reports a 'stalled' download); web pages (won't load - browsers report 'resource unavailable'); SSH sessions (CLI freezes); VPN connections (connections terminate); IM client connections (client starts reconnection attempts); ... (everything is pretty much dead). I've noticed that during such a period, I cannot even access the (web-based) administrative console of the router on my LAN at home (although it remains reachable for other devices). The problem occurs when connected to my home network, but also when I'm in the office. Devices other than my laptop are not affected. Given those two characteristics, I assume the cause lies within my laptop, not the network infrastructure. I'm running Ubuntu 11.10. Apart from applying automatic updates from Ubuntu, I can't think of a change to the OS that could have started the problems. I'm absolutely positive that this issue did not occur up until a few weeks ago (it's very noticeable, and only started to annoy me in the last few days). When the problem occurs, applications that use a network connection fail visibly (I get popups telling me that a VPN connection is broken, for instance). The network manager does not report any issues related to my wifi connection, though. Help?

    Read the article

  • Rails time stamps on images in CSS

    - by brad
    Just posted this on Stack but realized it may be more appropriate here. Rails timestamping is great: I'm using it to add expires headers to all files that end in the 10-digit timestamp. Most of my images, however, are referenced in my CSS. Has anyone come across a method that allows timestamps to be added to images referenced from CSS, or some funky rewrite rule that achieves this? I'd love for ALL images on my site, both inline and in CSS, to have this timestamp so I can tell the browser to cache them, but refresh any time the file itself changes. I couldn't find anything on the net about this, and I can't believe it isn't a more frequently discussed topic. I don't think my setup matters, because the actual expiring will hopefully happen the same way, based on the 10-digit timestamp, but I'm using Apache to serve all static content if that matters.

    Read the article

  • Hosting and scaling of a facebook application on cloud?

    - by DhruvPathak
    We will be building a Facebook application in Django (Python), but are still not sure where to host it economically, with a good provision to scale in case the app goes viral. Some details about the app: i) it will be HTML-based like a website, using Django as the framework; ii) 100K page views a day are expected if the app goes viral; iii) the users will not generate any media content, only some database data. It would be great if someone with more experience could give guidance on the following points: A) Hosting on Google App Engine, Amazon EC2 or some other cloud like Rackspace: preferable points found for App Engine were ease of deployment, cost effectiveness and easy scaling; for EC2: full control of the virtual machine, plus Amazon's NoSQL and RDBMS database services in case we decide to use them. B) Does the backend technology affect monthly cost? E.g., would the CPU and memory usage difference of Django over, for example, a PHP framework like CodeIgniter really make a remarkable difference in running costs? (Here is the article that triggered this thought process: http://journal.dedasys.com/2010/01/12/rough-estimates-of-the-dollar-cost-of-scaling-web-platforms-part-i#comments) C) Does something like Heroku, which provides additional services on top of Amazon EC2, prove to be better than raw cloud management? It is not that we are trying for premature scaling; we just want a good start so that we are ready to handle unpredicted growth and scale.

    Read the article

  • Notification framework for object lifecycle

    - by rlandster
    I am looking for an application, framework, or library that would help us with "object life-cycle management". There are many things that are created for users, departments, and services that, all too often, are left unmanaged. Some examples: user accounts, groups, SSL certificates, access rights, databases, software license provisionings, storage, list-serve accounts. These objects are created and managed by a wide variety of applications and systems. Typically, a user (person) requests (either explicitly or implicitly) one of these objects. A centralized management tool would help us manage such administration chores as: What objects does user X currently own/manage? Move the ownership of object P to user X; move all objects owned by user X (who has just been fired) to user Y. For all objects of type T that have expired, be sure the objects have been disabled or deleted by their provider. How many active (expired, about-to-expire) objects of type P are there? Send periodic notifications to all users who own active objects of type P reminding them of what they own. There is a security alert for objects of type P; send a notification to all users who own these types of objects to take a specific remedial action. Delete or disable a set of objects based on expiration (or some other criteria). These objects are directly managed through their own applications (Active Directory, MySQL, file systems, etc.) and may even have their own notification systems, but I want to centralize this into an "object management system". The OMS should allow: association with an external identity provider that defines who the users and groups are (e.g., LDAP, Active Directory); creation of objects; association of an object with a specific user and/or group; association with an expiration date; creation of flexible reporting, including letting users know what objects they currently own and their expiration dates; and integration with an external object "provider" via a plug-in. We could write something from scratch, but I am hoping there is something already out there that will help, either an entire application or a set of libraries that provide much of what is needed. Any ideas?
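
    No specific product in mind here, but the core data model behind such an OMS is small; the sketch below (all names invented for illustration) captures the ownership, type, provider and expiration fields the chores above query against:

        from dataclasses import dataclass
        from datetime import date, timedelta
        from typing import Optional

        @dataclass
        class ManagedObject:
            name: str                      # e.g. "cert: www.example.org"
            obj_type: str                  # "ssl_certificate", "user_account", ...
            owner: str                     # identity from LDAP / Active Directory
            provider: str                  # plug-in that can disable/delete it
            expires: Optional[date] = None

        def owned_by(objects, user):
            """What objects does user X currently own/manage?"""
            return [o for o in objects if o.owner == user]

        def reassign(objects, from_user, to_user):
            """Move all objects owned by user X (just fired) to user Y."""
            for o in owned_by(objects, from_user):
                o.owner = to_user

        def expiring(objects, obj_type, within_days=30):
            """Active objects of type T that are about to expire."""
            cutoff = date.today() + timedelta(days=within_days)
            return [o for o in objects
                    if o.obj_type == obj_type and o.expires and o.expires <= cutoff]

    The reporting, notification and provider plug-in layers would sit on top of a store like this; the hard part an off-the-shelf tool would have to solve is the per-provider integration, not the model itself.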

    Read the article

  • What causes memcache to delete keys?

    - by Arkaaito
    Our memcache install recently started removing keys, and we're not sure why. Large groups of keys vanish at the same time. Memcache reports that evictions are low to non-existent, and our app has no way to clear memcache (it can only delete specific keys). Even keys of which the app has no knowledge get deleted, so we're pretty convinced they're getting expired. However, our memcache configuration hasn't been touched in some time. Has anyone debugged an issue like this before, and if so, are there any steps you'd recommend we take? How flexible is memcache's expiration policy - is it possible that we're suddenly running into a criterion based on (say) write frequency to a key?
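
    One way to narrow this down is to poll the daemon's own counters over time; memcached's text protocol answers a plain `stats` command, so no client library is needed. A minimal sketch follows (host and port are assumptions, and exact stat names can vary slightly between memcached versions):

        import socket

        def memcached_stats(host="127.0.0.1", port=11211):
            """Return memcached's stats counters as a dict (text protocol)."""
            stats, buf = {}, b""
            with socket.create_connection((host, port), timeout=5) as s:
                s.sendall(b"stats\r\n")
                while True:
                    chunk = s.recv(4096)
                    if not chunk:
                        break
                    buf += chunk
                    if buf.endswith(b"END\r\n"):
                        break
            for line in buf.decode().splitlines():
                if line.startswith("STAT "):
                    _, name, value = line.split(" ", 2)
                    stats[name] = value
            return stats

        if __name__ == "__main__":
            s = memcached_stats()
            # evictions should stay low if memory pressure isn't the cause;
            # cmd_flush increasing would mean something is issuing flush_all,
            # which would explain whole groups of keys vanishing at once.
            for key in ("evictions", "expired_unfetched", "cmd_flush", "curr_items"):
                print(key, s.get(key, "n/a"))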

    Read the article

  • Latest update to Ubuntu 13.10 broke Intel graphics drivers

    - by James Davies
    I'm running Ubuntu 13.10 on an i7-4771 with Intel HD 4600 graphics, using a Dell UltraSharp 1440p monitor via DisplayPort. Up until today this configuration has been working perfectly; however, the latest update appears to have broken my graphics configuration, and Xorg now refuses to go above 1280p resolution. Running xrandr, it appears the driver incorrectly thinks my monitor is plugged into the HDMI port and detects a maximum resolution of 1920x1200 instead of 2560x1440 (it's actually plugged in via DisplayPort). Based on the apt history.log, the latest update was for the kernel. I'm presuming the issue is that the official Intel driver hasn't been updated to support this version? Is there any way to resolve this, or will I need to upgrade to 14.10 to get the latest driver from Intel?
        Start-Date: 2014-05-28 11:30:57
        Commandline: aptdaemon role='role-commit-packages' sender=':1.473'
        Install: linux-image-extra-3.11.0-22-generic:amd64 (3.11.0-22.38), linux-image-3.11.0-22-generic:amd64 (3.11.0-22.38), linux-headers-3.11.0-22:amd64 (3.11.0-22.38), linux-headers-3.11.0-22-generic:amd64 (3.11.0-22.38)

    Read the article

  • Release: Oracle Java Development Kit 8, Update 20

    - by Tori Wieldt
    Java Development Kit 8, Update 20 (JDK 8u20) is now available. This latest release of the Java Platform continues to improve upon the significant advances made in the JDK 8 release with new features, security and performance optimizations. These include new enterprise-focused administration features available in Oracle Java SE Advanced, products offering greater control of Java version compatibility, security updates, and a very useful new feature, the MSI-compatible installer. Download | Release Notes | Java SE 8 Documentation. New tools, features and enhancements highlighted in JDK 8 Update 20 are:
    - Advanced Management Console: the Java Advanced Management Console 1.0 (AMC) is available for use with the Oracle Java SE Advanced products. AMC employs the Deployment Rule Set (DRS) security feature, along with other functionality, to give system administrators greater and easier control in managing Java version compatibility and security updates for desktops within their enterprise, and for ISVs with Java-based applications and solutions.
    - MSI Enterprise JRE Installer: available for Windows 64- and 32-bit systems in the Oracle Java SE Advanced products, the MSI-compatible installer enables system administrators to provide automated, consistent installation of the JRE across all desktops in the enterprise, free of user-interaction requirements.
    - Performance: string de-duplication resulting in a reduced footprint, and improved support in G1 garbage collection for long-running apps.
    - A new 'force' feature in DRS (Deployment Rule Set) which allows system administrators to specify the JRE with which an applet or Java Web Start application will run. This is useful for legacy applications, so end users don't need to approve security exceptions to run them.
    - Java Mission Control 5.4 with new ease-of-use enhancements and launcher integration with Eclipse 4.4
    - JavaFX on ARM
    - Nashorn performance improvement by persisting bytecode after initial compilation
    There's much more information to be found in the JDK 8u20 Release Notes.

    Read the article

  • Another Exchange 2003 to Exchange 2010 mail flow issue

    - by Ryan Roussel
    During a migration recently, we came across another internal mail routing issue. The symptoms were identical to my previous post about Exchange internal mail routing: mail was flowing from 2010 to 2003, and from 2010 to the internet, but not from 2003 to 2010. I went through the normal checklist, looking at permissions, DNS, and the routing group connectors. I verified through the 2003 ESM that both servers listed in the routing group connectors were the routing master in their respective routing groups. I also verified that inheritable permissions were enabled for the Exchange 2003 server object in the schema. No luck with either. (For my previous post about this issue, in which inheritable permissions were the culprit, see: Exchange 2010, Exchange 2003 Mail Flow issue. And for routing group issues: Exchange 2007 Routing Group Connector Mayhem.) I finally enabled logging on the SMTP virtual server on Exchange 2003 and on the Default Receive Connector on 2010, sent a few test e-mails, and found that 2003 was having issues authenticating to 2010. By default, 2003 uses Exchange Server Authentication to communicate with 2010. The exact error, found in the SMTP logs on the Exchange 2003 side, was: 4.7.0 Temporary Authentication Failure. After searching on this error, I found the solution: the "Access this computer from the network" user right in the local computer policy on the Exchange 2010 server had been changed from the default. The network administrator had modified the Default Domain policy and changed this user right assignment to list only Domain Users. The fix was to clear this setting in the Default Domain policy, force gpupdate to refresh the group policy settings, and then ensure the appropriate users and groups were listed. This immediately fixed the problem, and the Exchange 2003 server was able to route mail to the Exchange 2010 mailboxes. The default user rights assignments for "Access this computer from the network" are, on workstations and servers: Administrators, Backup Operators, Power Users, Users, Everyone; on domain controllers: Administrators, Authenticated Users, Everyone. More can be found here: http://technet.microsoft.com/en-us/library/cc740196(WS.10).aspx

    Read the article

  • Transition from 2D to 3D Game development [closed]

    - by jakebird451
    I have been working in the 2D world for a long time, from manual blitting in Windows to SDL to Python (pygame, pyopengl) and a bunch in between. Needless to say, I have been programming for a while. A while ago I started to program in OpenGL via C++ on my Mac, and my work got fairly intricate after a while (3D models with a skeleton structure, and terrain development). After a long time of tinkering I stopped, because it was heavy work that only yielded a low-level understanding of how OpenGL works. Still interested in graphics and game development, I went looking for a stable game engine with some features to grow on. Licence requirement: anything other than GPL (LGPL will do). OS requirement: Mac & Windows. Shaders: GLSL or Cg (GLSL preferred due to experience). Models: any model format with rigging (bone) support & animation. I am looking at http://www.ogre3d.org/ currently and am starting to meddle with some examples. However, I am a little reluctant to spend a lot of time on it only to hit another dead end. So instead of falling down a spiraling black pit, I am posting my question to you guys to lead me in the right direction based on my requirements. How was your experience with the engine you recommend? Is it well documented? Does it have well-documented examples? Any library requirements (Boost, libpng, etc.)?

    Read the article

  • Kill Leaking Connections on SQL Server 2005

    - by Thierry Brunet
    We have a legacy ASP application that leaks SQL connections somewhere. In Activity Monitor, I can see a bunch of idle processes with Last Batch times over an hour old. When I look at the T-SQL command batch, these are always FETCH API_CURSORXXX, which from my understanding is caused by improperly closed ASP ADO Recordsets. While we try to pinpoint the offending code, is there a way for me to monitor which requests open which cursors? I'm assuming Profiler, but I'm not sure what exactly I should be monitoring. I can see a bunch of calls to sp_cursoropen, but I don't see the API_CURSORXXX name anywhere. Second, would anyone be able to suggest a script we could run to kill these processes based on the Last Batch time being more than 10 minutes old and the Last Batch command being FETCH API_CURSORXXX? (See the hedged sketch below.) For various reasons, we unfortunately don't have any SQL Server DBAs.
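
    A heavily hedged sketch of the kind of cleanup script asked for, in Python with pyodbc: the connection string, the spid > 50 cutoff, the 10-minute threshold and the decision to KILL automatically are all assumptions to adapt and test carefully, and KILL rolls back whatever the session was doing, so run it dry first.

        import pyodbc

        # Placeholder connection string; adjust driver/server/auth for your setup.
        CONN_STR = ("DRIVER={SQL Server};SERVER=myserver;DATABASE=master;"
                    "Trusted_Connection=yes")

        def kill_leaked_cursors(idle_minutes=10, dry_run=True):
            cn = pyodbc.connect(CONN_STR, autocommit=True)
            cur = cn.cursor()
            # Idle user sessions older than the threshold (SQL Server 2005: sysprocesses).
            cur.execute(
                "SELECT spid FROM master.dbo.sysprocesses "
                "WHERE spid > 50 AND status = 'sleeping' "
                "AND last_batch < DATEADD(MINUTE, ?, GETDATE())",
                -idle_minutes)
            for (spid,) in cur.fetchall():
                # Leaked ADO cursors show up as FETCH API_CURSOR... in the input buffer.
                rows = cn.cursor().execute("DBCC INPUTBUFFER(%d)" % spid).fetchall()
                event_info = rows[0][2] if rows else None   # EventType, Parameters, EventInfo
                if event_info and event_info.strip().startswith("FETCH API_CURSOR"):
                    print("would kill" if dry_run else "killing", spid, event_info)
                    if not dry_run:
                        cn.cursor().execute("KILL %d" % spid)

        if __name__ == "__main__":
            kill_leaked_cursors()   # switch to dry_run=False only after checking the output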

    Read the article

  • Build a user's profile directory on creation in batch

    - by Moses
    I have a batch script that I use when I set up new Windows 7 PCs; it creates a user based on a variable, creates a folder on their desktop, then shares it:
        @echo off
        SET /p unitnumber="Enter unit number: "
        net user unit%unitnumber% password /add /expire:never
        MD "C:\Users\unit%unitnumber%\Desktop\Accounting #%unitnumber%"
        runas /user:administrator "net share Accounting#%unitnumber%="C:\Users\unit%unitnumber%\Desktop\Accounting #%unitnumber%""
    I discovered that the share that is created is overwritten when the newly created user first logs on, because Windows builds their profile directory at that time. Is there any way to initiate the build of a user's profile directory from the batch file, just after creating the account? The only thing that looks useful is the /homedir:pathname switch for the net user command, but I believe that option assumes the directory already exists. Other than that, web research hasn't been fruitful. I'd be happy to use whatever gets this done, as long as I can incorporate it into or launch it from the batch file. Any suggestions?

    Read the article

  • Identifying the version of ATI Catalyst drivers

    - by Snark
    I have bought a new video card based on the ATI Radeon HD 5670 chipset. I couldn't make it work with the latest ATI Catalyst drivers found on their website; only the drivers found on the CD delivered with the card worked. How can I tell which version of Catalyst is installed on my PC (running Windows 7 64-bit)? The ATI Catalyst Control Center returns the following information:
        Driver Packaging Version          8.673-091110a-092263C
        Provider                          ATI Technologies Inc.
        2D Driver Version                 8.01.01.973
        2D Driver File Path               /REGISTRY/MACHINE/SYSTEM/ControlSet001/Control/CLASS/{4D36E968-E325-11CE-BFC1-08002BE10318}/0000
        Direct3D Version                  8.14.10.0708
        OpenGL Version                    6.14.10.9120
        Catalyst™ Control Center Version  2009.1110.2225.40230
    I do not recognize anything pointing to a "marketing version". The website says the current version of Catalyst is 10.2.

    Read the article

  • Huge spike in traffic tracked by Google Analytics from Safari browsers

    - by Petra Barus
    My site urbanindo.com recently experienced a huge spike in traffic as tracked by Google Analytics. The spike sometimes shows up on several of the same pages. This is odd because I rarely experienced that much traffic before; some pages can have hundreds of visitors at the same time. But when I read the web server log, those pages only show up in one or two entries, not hundreds like GA shows. The only thing the entries have in common is a similar user agent: Mozilla/5.0 (iPad; CPU OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3 and Mozilla/5.0 (iPhone; CPU iPhone OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3. When I opened the Google Analytics Audience > Technology > Browser & OS report and plotted the chart by browser, I saw that the huge spike came from Safari. These spikes started at the beginning of this August, which happens to be when I started using multiple web servers behind a load balancer (although I'm pretty sure those two are not related). Is there something wrong with my GA configuration?

    Read the article

  • Memory is free, but still swapping?

    - by japancheese
    Hello, I'm sure this is a pretty basic question, but I'm just trying to get a grasp of what's going on with my Ubuntu (Hardy Heron) server (running a Rails-based site). It seems that I have free memory available, yet the system is reporting that it is still swapping memory (unless I'm reading this incorrectly?). Here is the "free -m" output:
                     total       used       free     shared    buffers     cached
        Mem:          1024        905        118          0         33        409
        -/+ buffers/cache:        462        561
        Swap:         2047         95       1952
    Could anyone explain to me some possible reasons why it maintains 95 MB of swap at all times (it is never less)? I'm just looking for some leads on things I could check out that would explain to me exactly how memory is utilized in Linux.
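
    A side note on reading those numbers: the 118 MB "free" figure ignores memory Linux is holding as buffers and page cache, which it hands back on demand; the second line of free -m already accounts for that. The sketch below reads the same standard /proc/meminfo fields (values are in kB) to show the "effectively free" figure; it is only an illustration, not a diagnosis of the swap usage itself.

        def meminfo():
            """Parse /proc/meminfo into a dict of kB values."""
            info = {}
            with open("/proc/meminfo") as f:
                for line in f:
                    key, value = line.split(":", 1)
                    info[key] = int(value.split()[0])    # drop the ' kB' suffix
            return info

        m = meminfo()
        free_mb = m["MemFree"] // 1024
        effectively_free_mb = (m["MemFree"] + m["Buffers"] + m["Cached"]) // 1024
        swap_used_mb = (m["SwapTotal"] - m["SwapFree"]) // 1024
        print(f"free: {free_mb} MB, free+buffers/cache: {effectively_free_mb} MB, "
              f"swap used: {swap_used_mb} MB")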

    Read the article

  • linux migration/N high cpu consumption

    - by Alexander
    On my Linux appliance, based on the 3.0.0-14 kernel, I get:
        RPN:/tmp# ps axuf | grep migration
        root      6  92.9  0.0      0     0 ?        S    Apr23 2788:33  \_ [migration/0]
        root      7  99.7  0.0      0     0 ?        S    Apr23 2993:20  \_ [migration/1]
    and my top output is:
        RPN:/tmp# top -b -n1
        top - 12:03:41 up 2 days, 2:18, 5 users, load average: 25.76, 25.26, 24.73
        Tasks: 171 total, 1 running, 168 sleeping, 0 stopped, 2 zombie
        Cpu(s): 14.0%us, 12.6%sy, 0.8%ni, 72.0%id, 0.3%wa, 0.0%hi, 0.3%si, 0.0%st
        Mem: 1543032k total, 1264728k used, 278304k free, 25308k buffers
        Swap: 0k total, 0k used, 0k free, 183168k cached
    My question: why do the "migration/N" processes take so much CPU?

    Read the article

  • Overloading methods that do logically different things, does this break any major principles?

    - by siva.k
    This is something that's been bugging me for a bit now. In some cases you see code that is a series of overloads, but when you look at the actual implementation you realize they do logically different things. However, writing them as overloads allows the caller to ignore this and get the same end result. But would it be more sound to name the methods more explicitly than to write them as overloads?
        public void LoadWords(string filePath)
        {
            var lines = File.ReadAllLines(filePath).ToList();
            LoadWords(lines);
        }

        public void LoadWords(IEnumerable<string> words)
        {
            // loads words into a List<string> based on some filters
        }
    Would these methods better serve future developers by being named LoadWordsFromFile() and LoadWordsFromEnumerable()? It seems unnecessary to me, but if that is better, what programming principle would apply here? On the flip side, it'd mean you didn't need to read the signatures to see exactly how you can load the words, which, as Uncle Bob says, would be a double take. But in general, is this type of overloading to be avoided?

    Read the article

  • How to Add a Taskbar to the Desktop in Ubuntu 14.04

    - by Lori Kaufman
    If you’ve switched to Ubuntu from Windows, it may take some time to get used to the new and different interface. However, you can easily incorporate a familiar Windows feature, the Taskbar, into Ubuntu to make the transition easier. A tool called Tint2 provides a bar at the bottom of the Ubuntu Desktop that resembles the Windows Taskbar. We will show you how to install it and make it start every time you log into Ubuntu. NOTE: When we say to type something in this article and there are quotes around the text, DO NOT type the quotes, unless we specify otherwise.
    Press Ctrl + Alt + T to open a Terminal window. To install Tint2, type the following line at the prompt and press Enter: sudo apt-get install tint2. Type your password at the prompt and press Enter. The progress of the installation displays, and then a message displays saying how much disk space will be used. When asked if you want to continue, type a "y" and press Enter. When the installation has finished, close the Terminal window by typing "exit" at the prompt and pressing Enter.
    Click the Search button at the top of the Unity bar. Start typing "startup applications" in the Search box. Items that match what you type start displaying below the Search box. When the Startup Applications tool displays, click the icon to open it. On the Startup Applications Preferences window, click Add. On the Add Startup Program dialog box, enter a name for the startup application. This name displays in the list on the Startup Applications Preferences window. Type "tint2" in the Command edit box, enter a description in the Comment edit box, if desired, and click Add. Tint2 is added as a startup program and will start every time you log into Ubuntu. Click Close to close the Startup Applications Preferences window.
    Log out and log back in to make the Taskbar available on the desktop. You do not need to reboot the computer for this change to take effect. Now, when you minimize a program, an icon for it displays on the Taskbar at the bottom of the screen, just like the Taskbar in Windows. If you decide that you don't want the Taskbar to display every time you log into Ubuntu, you can uncheck the Tint2 startup program on the Startup Applications Preferences window. You don't need to delete it from the list.

    Read the article

  • Views : ViewControllers, many to one, or one to one?

    - by conor
    I have developed an Android application where, typically, each view (layout.xml) displayed on the screen has its own corresponding fragment (for the purpose of this question I may refer to this as a ViewController). These views and Fragments/ViewControllers are named appropriately to reflect what they display. This has the effect of letting the programmer easily pinpoint the files associated with what they see on any given screen. The above refers to the one-to-one part of my question. Please note that there are a few exceptions where very similar content is displayed on two views, so the same ViewController is used for both (using a simple switch (type) to determine which layout.xml file to load). On the flip side, I am currently working on the iOS version of the same app, which I didn't develop. It seems to adopt more of a one-to-many (ViewController:View) approach: there appears to be one ViewController that handles the display logic for many different types of views. In the ViewController is an assortment of boolean flags and arrays of data (to be displayed) that are used to determine which view to load and how to display it. This seems very cumbersome to me, and coupled with no comments and ambiguous variable names, I am finding it very difficult to implement changes in the project. What do you guys think of the two approaches? Which one would you prefer? I'm really considering putting in some extra time at work to refactor the iOS app into a more 1:1-oriented approach. My reasoning for 1:1 over M:1 is modularity and legibility. After all, don't some people measure the quality of code by how easy it is for another developer to pick up the reins, or how easy it is to pull a piece of code and use it somewhere else?

    Read the article

  • does it still make sense to directly drop mails that trigger RBLs?

    - by Luke404
    Once upon a time, using RBLs to drop mail was actually a good idea. These days it seems that is no longer viable, for one reason or another, so everyone has switched / is switching to using RBLs as just another test in score-based antispam solutions (read: SpamAssassin & friends). This gives good results, but neglects one of the benefits of RBLs, namely the ability to reject (supposed) spam before even receiving the message body. Is there still any RBL that makes sense to use that way, hard-rejecting anything that fires a match against the list? If there are people doing it that way, do you ever get false positives due to the list?
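
    For reference, the check itself is just a DNS lookup against the reversed client IP; a minimal sketch is below (the list name is only an example, and each list's usage policy should be checked before querying it in production):

        import socket

        def listed_in_dnsbl(ip, zone="zen.spamhaus.org"):
            """Return True if ip appears in the given DNS blacklist zone."""
            reversed_ip = ".".join(reversed(ip.split(".")))
            try:
                socket.gethostbyname(f"{reversed_ip}.{zone}")  # any A record means 'listed'
                return True
            except socket.gaierror:
                return False

        # A hard-rejecting MTA runs this during the SMTP dialogue and returns a
        # 5xx before accepting DATA; a scoring setup instead feeds the same result
        # into something like SpamAssassin as one test among many.
        print(listed_in_dnsbl("127.0.0.2"))   # 127.0.0.2 is the conventional test address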

    Read the article

  • links for 2010-06-15

    - by Bob Rhubart
    You're invited: Oracle Solaris Day, June 28th, Herzliya - Openomics: How open innovation and technology adoption translates to business value, with stories from our developer support work at Sun ISV Engineering (tags: ping.fm)
    Edwin Biemond: Enriching and Forwarding your data with the Spring Component in SOA Suite 11g PS2 - Oracle ACE Edwin Biemond describes "how easy it is to use Java in the Spring Component, how you can wire this Component to other Components, Services or References adapters." (tags: oracleace soa oracle middleware)
    Venkatakrishnan J: Oracle BI EE 10.1.3.4.1 - Currency Conversions & FX Translations - Part 1 - "As part of the BI EE setup we need to ensure that such local currency transactions are converted to a common reporting currency," says Rittman Mead's Venkatakrishnan. (tags: oracle businessintelligence)
    Richard Veryard: Ecosystem SOA 2 - "What are the problems of large complex sociotechnical systems?" asks Rich Veryard. "How far do SOA and enterprise architecture help to address this problem space, and what else might we need?" (tags: soa entarch)
    Khanderao Kand: Oracle BPM Suite .. unified engine - "This Suite is based on the unified process foundation of Oracle Business Process Management Suite 11g. It has the same engine that executes both BPEL and BPMN processes," says Kand. (tags: bpel soa bpm oracle)
    Webcast: Revealing the Secrets that will Re-Energize your Services Strategies - Oracle's Peter Heller and Robert Covington discuss how to overcome the many unforeseen technical and organizational barriers in order to meet the high expectations of dynamic business requirements in this live webcast, July 14, 2010, 9:00 AM PDT / Noon EDT (tags: entarch oracle webcast)

    Read the article

  • How to get started setting up IP security cameras

    - by dave
    I have just finished renovating my house. As part of the job, I have Cat6 cable run through the house, including two external plugs. All cables terminate in the same location. My rough plan is to install two IP cameras to monitor the front and rear, run PoE from a central router to the two external cameras, plug my PC into the same router, and run magic software X. Any machine plugged into the router or connected wirelessly should then be able to get a live feed and alerts based on motion detection. That's the plan, but I'm not sure how feasible it is, or what hardware and monitoring software to get. Has anyone done something similar? What were your experiences?

    Read the article
