Search Results

Search found 428 results on 18 pages for 'ami winter'.


  • XNA Notes 008

    - by George Clingerman
    This week has been a rough one. I’ve been sick and then in some kind of slump for my afternoon coding sessions. It could be from the cold, could be I’m still tired from writing that Windows Phone 7 game development book (which is out now!) or it could just be I’m tired of winter and want some sunshine. All I know is that even while I’m sick, the XNA world keeps going along at its whirlwind pace. Below are the things I caught in between my coughing fits. Time Critical XNA News: The 2011 MVP summit is almost here, so pass along your feelings and thoughts so the MVPs can take them and share them with the team in person http://forums.create.msdn.com/forums/p/76317/464136.aspx#464136 Dream Build Play - there’s no new announcement yet, but you can’t get much closer to the end of February than this! http://www.dreambuildplay.com/Main/Home.aspx XNA Team: Dean Johnson from the XNA team shares an excellent way of handling Guide.IsTrialMode on WP7 http://blogs.msdn.com/b/dejohn/archive/2011/02/21/calling-guide-istrialmode-on-windows-phone-7.aspx Nick Gravelyn tries a new tactic in deciding if there’s enough interest to develop a sequel or not. Don’t YOU want Pixel Man 2 to come out? http://nickgravelyn.com/pixelman2/ XNA MVPs: Andy “The ZMan” Dunn finally shares what he’s been secretly working on these past 4 months http://twitter.com/#!/The_Zman/status/40590269392887808 http://www.youtube.com/watch?v=Rg8Z0ZdYbvg&feature=youtu.be Joel Martinez lets developers around NYC know they should be signing up for Game Hack Day http://twitter.com/joelmartinez/statuses/41118590862102528 http://gamehackday.org/71fdk XNA Developers: Michael McLaughlin shares an XNA RenderTarget2D Sample http://geekswithblogs.net/mikebmcl/archive/2011/02/18/xna-rendertarget2d-sample.aspx Martin Caine starts a new series on Deferred Rendering in XNA 4.0 http://twitter.com/#!/MartinCaine/status/39735221339291648 http://martincaine.com/xna/deferred_rendering_in_xna_4_introduction ElemenyCy posts about his fun time with the IntermediateSerializer http://www.ubergamermonkey.com/xna/holy-bloated-xml-batman/ Ben Kane releases a narrated dev diary video for Project Splice. Let him know if you’d like to see more! (I know I do!) http://twitter.com/#!/benkane/status/39846959498002432 http://www.youtube.com/watch?v=1EmziXZUo08&feature=youtu.be Jason Swearingen (of Novaleaf) posts part 1 of his Spatial Partitioning solutions http://altdevblogaday.org/2011/02/21/spatial-partitioning-part-1-survey-of-spatial-partitioning-solutions/ Brian Lawson of Dark Flow Studios shares what he’s been up to lately with lots of pretty screenshots and hints of announcements from Microsoft... http://www.darkflowstudios.com/entry/short-and-sweet-part-1 Luke Avery starts a new blog where he plans on making XNA tutorials for beginners (and he’s got a few started already!) http://programmingwithovery.wordpress.com/ Xbox LIVE Indie Games (XBLIG): GameMarx Episode 10 http://www.gamemarx.com/video/the-show/24/ep-10-february-18-2010.aspx Minecraft clone FortressCraft coming to XBLIG http://www.eurogamer.net/articles/2011-02-23-minecraft-clone-fortresscraft-hits-xblig ezMuze+ starts an IndieGoGo fundraiser campaign to help fund their second game and get it onto even more devices! 
http://www.indiegogo.com/ezmuze Gamergeddon XBLIG round up http://www.gamergeddon.com/2011/02/20/xbox-indie-game-round-up-february-20th/?utm_campaign=twitter&utm_medium=twitter&utm_source=twitter JForce Games loses their Ego http://jforcegames.com/blog/index.php?itemid=121&catid=4 XNA Game Development: @BallerIndustry reminds all XNA developers that the Maths are important ;) http://twitter.com/#!/BallerIndustry/status/39317618280243200 http://www.youtube.com/watch?v=MjV3XDFsjP4&feature=player_embedded#at=106 @suhinini stumbles on an older but extremely useful post on XNA Content Pipeline debugging http://twitter.com/#!/suhinini/status/39270189476352000 http://badcorporatelogo.wordpress.com/2010/10/31/xna-content-pipeline-debugging-4-0/ XNA Game Development Workshops at Singapore Universities http://innovativesingapore.com/2011/02/xna-game-development-workshops-at-singapore-universities/ Indiefreaks announces that IGF v0.3 is out with Xbox 360 support, SunBurn 2.0.12 and it’s now Open Source! http://twitter.com/#!/indiefreaks/status/39391953971982336 @liotral announces a new series on properly designing a game http://twitter.com/#!/liortal53/status/39466905081217024 http://liortalblog.wordpress.com/2011/02/20/hello-cosmos/ Indies and XNA at CodeStock 2011 http://www.gamemarx.com/news/2011/02/20/indies-and-xna-at-codestock-2011.aspx Train Frontier Express posts about XNA Content Hotloading http://trainfrontierexpress.blogspot.com/2011/02/xna-content-hotloading-overview.html Slyprid announces a new character editor in Transmute http://twitter.com/#!/slyprid/status/40146992818696192 http://www.youtube.com/watch?v=OKhFAc78LDs&feature=youtu.be The XNA 2D from the ground up tutorial series http://xna-uk.net/blogs/darkgenesis/archive/2011/02/23/recap-the-xna-2d-from-the-ground-up-tutorial-series.aspx Sgt.Conker posts a “Clingerman” (hey that’s me!) to stay relevant http://www.sgtconker.com/2011/02/posting-a-clingerman-to-stay-relevant/

    Read the article

  • Friday Fun: The Search For Wondla

    - by Asian Angel
    The best day of the week is finally here again, so it is time to have some fun while waiting to go home for the weekend. The game we have for you today takes you far into humanity’s future where you journey with Eva Nine in her quest to find other humans. Note: Today’s game comes with a double bonus! First, there is a sequel game that you can move on to once you have completed the first one. Second, there are three wallpapers available in multiple sizes for those who enjoy the characters and artwork presented in the game (see below). The Search For Wondla The object of the game is to find the differences between two similar looking images based on artwork from The Search For Wondla by Tony DiTerlizzi. Are you ready to join Eva Nine in her quest to find other humans in the future? Note: There is a version available for those who would like to play The Search For Wondla on their iPads! The first game has 28 levels of difference finding goodness for you to work through. Each level will list the minimum number of differences that you need to find to progress to the next level. If you need a hint along the way just click on the Shake or Reveal options at the bottom of the game play window. Get a level completed quickly enough and you get bonus points! There will also be differences in the images for individual levels each time you play the game, so have fun! Note: The second game has 12 levels to complete. To give you a good feel for the game we have covered the first six levels here and provided seven clues for each level (you are only required to find a minimum of five). Eva Nine viewing the holographic outdoor projections in the main hub of her living quarters… Eva Nine is in a grumpy mood as Muthr visits her at bedtime… Eva Nine in her secret hideaway visiting old “childhood friends” as she contemplates her recent survival test failure. Eva Nine viewing the entire set of floor plans for the underground sanctuary where she was born and has been growing up. Eva Nine’s escape to the surface as the underground sanctuary is attacked by the bounty hunter creature Besteel. Eva Nine on the surface for the first time in her young life. Will she be successful in her quest? There is only one way to find out! Play The Search For Wondla Part 1 Play The Search For Wondla Part 2 Bonus Content If you have enjoyed this game you can learn more about the book and download the three wallpapers shown here by visiting the link below! Note: The wallpapers come in the following sizes: 1024*768, 1280*800, 1280*1024, 1440*900, iPhone, iPhone4, and iPad (click on the Extras link at the bottom of the page). Visit the Search For Wondla Homepage Do you enjoy playing difference finding games? Then you will definitely want to have a look at another wonderful game that we have covered here: Friday Fun: Isis

    Read the article

  • Ask the Readers: Backing Your Files Up – Local Storage versus the Cloud

    - by Asian Angel
    Backing up important files is something that all of us should do on a regular basis, but may not have given as much thought to as we should. This week we would like to know if you use local storage, cloud storage, or a combination of both to back your files up. Photo by camknows. For some people local storage media may be the most convenient and/or affordable way to back up their files. Having those files stored on media under your control can also provide a sense of security and peace of mind. But storing your files locally may also have drawbacks if something happens to your storage media. So how do you know whether the benefits outweigh the disadvantages or not? Here are some possible pros and cons that may affect your decision to use local storage to back up your files: Local Storage Pros You are in control of your data Your files are portable and can go with you when needed if using external or flash drives Files are accessible without an internet connection You can easily add more storage capacity as needed (additional drives, etc.) Cons You need to arrange room for your storage media (if you have multiple externals drives, etc.) Possible hardware failure No access to your files if you forget to bring your storage media with you or it is too bulky to bring along Theft and/or loss of home with all contents due to circumstances like fire If you are someone who is always on the go and needs to travel as lightly as possible, cloud storage may be the perfect way for you to back up and access your files. Perhaps your laptop has a hard-drive failure or gets stolen…unhappy events to be sure, but you will still have a copy of your files available. Perhaps a company wants to make sure their records, files, and other information are backed up off site in case of a major hardware or system failure…expensive and/or frustrating to fix if it happens, but once again there is a nice backup ready to go once things are fixed. As with local storage, here are some possible pros and cons that may influence your choice of cloud storage to back up your files: Cloud Storage Pros No need to carry around flash or bulky external drives All of your files are accessible wherever there is an internet connection No need to deal with local storage media (or its’ upkeep) Your files are still safe if your home is broken into or other unfortunate circumstances occur Cons Your files and data are not 100% under your control Possible hardware failure or loss of files on the part of your cloud storage provider (this could include a disgruntled employee wreaking havoc) No access to your files if you do not have an internet connection The cloud storage provider may eventually shutdown due to financial hardship or other unforeseen circumstances The possibility of your files and data being stolen by hackers due to a security breach on the part of your cloud storage provider You may also prefer to try and cover all of the possibilities by using both local and cloud storage to back up your files. If something happens to one, you always have the other to fall back on. Need access to those files at or away from home? As long as you have access to either your storage media or an internet connection, you are good to go. Maybe you are getting ready to choose a backup solution but are not sure which one would work better for you. Here is your chance to ask your fellow HTG readers which one they would recommend. Got a great backup solution already in place? Then be sure to share it with your fellow readers! 

    Read the article

  • Taking Your Business Scorecard Golfing

    - by tobyehatch
    Our workplace world is definitely changing. Not only are we taking work home, but we are working during odd hours in some very strange places.  I had the pleasure of interviewing Jacques Vigeant, Product Strategy Manager for Oracle Business Intelligence and Enterprise Performance Management, on a Podcast, and he enlightened me about how our mobile devices and business scorecards are enabling us to be more accountable and keep a watchful eye on business – even while on the golf course.Business scorecards have been around for many years - so I asked Jacques if he felt they had changed significantly due to technology. His answer was, “Yes, and no.”  Jacques agreed that scorecard enthusiasts are still passionate about executing the company strategy and monitoring Key Performance Indicators (KPIs), but scorecards and Business Intelligence (BI) as a whole have changed.  He explained that five to six years ago, people did BI work at the office and, for the most part, disconnected from their computer and workplace when they went home – with the exception of checking email and making a phone call or two. But now, that is no longer the case. People are virtually always connected with work and, more importantly, expect their BI and scorecards to be ‘always on,’ regardless of whether they are at their desk or somewhere else.Basically, the BI paradigm has changed from a 'pull' model, where employees are at their desks querying or pulling information from the system, to a 'push' model where employees expect their BI and scorecard systems to reach out (or push information) to them when there is something of note to learn or something on which they need to take action. I found this very interesting. However mobile devices do have their limitations with respect to screen sizes – does it really make sense to look at your strategy/scorecard on tiny devices? What kind of scorecard activities can you really expect to be able to do? Jacques’ answer was very logical. “When you think of a scorecard, it is really comprised of an organization of KPIs that are aligned with the strategic objectives of your company. KPIs are the heart of how you will execute your strategy. So, if you decompose that a little more, each KPI is well defined with the thresholds that you should keep an eye on and who is responsible for them. When we talk about scorecarding on a phone, we aren’t talking about surfing the strategy and exploring the strategy map like we do on the desktop. In a scorecarding context, we use the phone more as an alerting mechanism or simple monitoring device for your KPIs.”Jacques gave a great example of an inventory manager who took part of an afternoon off to go golfing before winter finally hit, and while on the front nine holes, his phone vibrated. His scorecard was alerting him that the inventory levels for one of the products was below some threshold that he had set.  From his phone, he had set up three options within Oracle Scorecard and Strategy Management (OSSM) for this type of situation:  1. Contact the warehouse manager directly by phone and work it out (standard phone function)  2. Tap/hold the KPI and add an annotation to the KPI in OSSM using the dictation capabilities of the phone and deal with it more fully when he gets back to the office  3. 
Tap/hold the KPI and invoke a business process from OSSM to transfer product from another warehouse with higher stock levels to the one that needs it  Being on a phone should still give you options to quickly deal with situations as needed, but mobile phones are not designed for nor should try to replicate the full desktop experience. We covered other interesting subjects in the interview, including how Oracle is keeping pace with mobile innovation and new devices such as Google Glasses, Galaxy Gear, Pebble Watches and more, and how Oracle is handling mobile security– which is great news for our mobile workforce. To listen to the entire Podcast, click here.To learn more about Oracle Scorecard and Strategy Management, click here.

    Read the article

  • Map/Reduce on an array of hashes in CouchDB

    - by sebastiangeiger
    Hello everyone, I am looking for a map/reduce function to calculate the status in a Design Document. Below you can see an example document from my current database. { "_id": "0238f1414f2f95a47266ca43709a6591", "_rev": "22-24a741981b4de71f33cc70c7e5744442", "status": "retrieved image urls", "term": "Lucas Winter", "urls": [ { "status": "retrieved", "url": "http://...." }, { "status": "retrieved", "url": "http://..." } ], "search_depth": 1, "possible_labels": { "gender": "male" }, "couchrest-type": "SearchTerm" } I'd like to get rid of the status key and rather calculate it from the statuses of the urls. My current by_status view looks like the following: function(doc) { if (doc['status']) { emit(doc['status'], null); } } I tried some things but nothing actually works. Right now my Map Function looks like this: function(doc) { if(doc.urls){ emit(doc._id, doc.urls) } } And my Reduce Function function(key, value, rereduce){ var reduced_status = "retrieved" for(var url in value){ if(url.status=="new"){ reduced_status = "new"; } } return reduced_status; } The result is that I get retrieved everywhere which is definitely not right. I tried to narrow down the problem and it seems to be that value is no array, when I use the following Reduce Function I get length 1 everywhere, which is impossible because I have 12 documents in my database, each containing between 20 to 200 urls function(key, value, rereduce){ return value.length; } What am I doing wrong? (I know I want you to write code for me and I'm feeling guilty, but right now I do the calculation of the statuses in ruby after getting the data from the database. It would be nice to already get the right data from the database)
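    One reading of the failure, offered as a sketch rather than a verified fix: a CouchDB reduce function receives an array of the values emitted by map (here, whole urls arrays), and JavaScript's for...in loop iterates over array indices rather than elements, so url.status is never set. Assuming the document shape shown above, the status can be derived entirely in the map step, with no reduce at all:

        // Hypothetical replacement view: derive the status while mapping.
        function(doc) {
          if (doc.urls) {
            var status = "retrieved";
            for (var i = 0; i < doc.urls.length; i++) {
              if (doc.urls[i].status === "new") {
                status = "new";      // any "new" url makes the whole document "new"
                break;
              }
            }
            emit(doc._id, status);   // one row per document, value is the derived status
          }
        }

    Querying that view would then return the computed status per document, without the Ruby post-processing step.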

    Read the article

  • PHP's SimpleXML: How to use colons in names

    - by nute
    I am trying to generate a Google Merchant RSS feed using SimpleXML. The sample given by Google is: <?xml version="1.0"?> <rss version="2.0" xmlns:g="http://base.google.com/ns/1.0"> <channel> <title>The name of your data feed</title> <link>http://www.example.com</link> <description>A description of your content</description> <item> <title>Red wool sweater</title> <link> http://www.example.com/item1-info-page.html</link> <description>Comfortable and soft, this sweater will keep you warm on those cold winter nights.</description> <g:image_link>http://www.example.com/image1.jpg</g:image_link> <g:price>25</g:price> <g:condition>new</g:condition> <g:id>1a</g:id> </item> </channel> </rss> My code has things like: $product->addChild("g:condition", 'new'); Which generates: <condition>new</condition> I read online that I should instead use: $product->addChild("g:condition", 'new', 'http://base.google.com/ns/1.0'); Which now generates: <g:condition xmlns:g="http://base.google.com/ns/1.0">new</g:condition> This seems very counter-intuitive to me, as now the "xmlns" declaration is on almost EVERY line of my RSS feed instead of just once in the root element. Am I missing something?
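    For comparison, a minimal sketch (untested against this feed) of the commonly suggested workaround: declare the g: prefix once on the root element and keep passing the namespace URI to addChild(); SimpleXML should then stop repeating the xmlns:g declaration on children whose prefix is already declared on an ancestor:

        <?php
        // Sketch only: the root element declares the namespace prefix up front.
        $ns  = 'http://base.google.com/ns/1.0';
        $rss = new SimpleXMLElement('<rss version="2.0" xmlns:g="' . $ns . '"/>');
        $channel = $rss->addChild('channel');
        $channel->addChild('title', 'The name of your data feed');
        $product = $channel->addChild('item');
        $product->addChild('title', 'Red wool sweater');
        $product->addChild('g:condition', 'new', $ns); // expected: <g:condition>new</g:condition>
        $product->addChild('g:price', '25', $ns);
        echo $rss->asXML();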

    Read the article

  • Is there any tool to extract keywords from an English text or article in Java?

    - by user555581
    Dear experts, I am trying to identify the type of a web site (in English) by machine. I download the homepage of the web site, parse the HTML, and get the content of the page. For example, below is some content from CNN.com. I then try to get the keywords of the web page and map them against my database. If the keywords include terms like "news" or "breaking news", the site is classified as a news site. If there are words like "healthy" or "medical", it will be classified as a medical site. There are some tools that can do the text segmentation, but it is not easy to find a tool that handles the semantics: for example, "online shopping" is a single keyword and should not be split into two words. The combination is the helpful information, while "online" and "shopping" on their own are less useful, as "online" may also appear in "online travel"... • Newark, JFK airports reopen • 1 runway reopens at LaGuardia Airport • Over 4,155 flights were cancelled Monday • FULL STORY * LaGuardia Airport snowplows busy Video * Are you stranded? | Airport delays * Safety tips for winter weather * Frosty fun Video | Small dog, deep snow Latest news * Easter eggs used to smuggle cocaine * Salmonella forces cilantro, parsley recall * Obama's surprising verdict on Vick * Blue Note baritone Bernie Wilson dead * Busch aide to 911: She's not waking up * Girl, 15, last seen working at store in '90 * Teena Marie's death shocks fans * Terror network 'dismantled' in Morocco * Saudis: 'Militant' had al Qaeda ties * Ticker: Gov. blasts Obama 'birthers' * Game show goof is 800K mistakeVideo * Chopper saves calf on frozen pondVideo * Pickpocketing becomes hands-freeVideo * Chilean miners going to Disney World * Who's the most intriguing of 2010? * Natalie Portman is pregnant, engaged * 'Convert all gifts from aunt' CNNMoney * Who controls the thermostat at home? * This Just In: CNN's news blog
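    One rough, dependency-free sketch of pulling candidate keywords out of page text by frequency after removing stop words (the stop-word list and limits below are invented for illustration; libraries such as Apache Lucene or Apache OpenNLP provide proper tokenizers and analyzers, and detecting phrases like "online shopping" would need an n-gram pass on top of this):

        import java.util.*;

        public class KeywordExtractor {
            // Tiny illustrative stop-word list; a real one would be much longer.
            private static final Set<String> STOP_WORDS = new HashSet<>(
                    Arrays.asList("the", "a", "an", "of", "to", "and", "in", "is", "for", "at"));

            public static List<String> topKeywords(String text, int n) {
                Map<String, Integer> counts = new HashMap<>();
                for (String token : text.toLowerCase().split("[^a-z]+")) {
                    if (token.length() > 2 && !STOP_WORDS.contains(token)) {
                        counts.merge(token, 1, Integer::sum);
                    }
                }
                List<Map.Entry<String, Integer>> sorted = new ArrayList<>(counts.entrySet());
                sorted.sort((a, b) -> b.getValue() - a.getValue());
                List<String> keywords = new ArrayList<>();
                for (int i = 0; i < Math.min(n, sorted.size()); i++) {
                    keywords.add(sorted.get(i).getKey());
                }
                return keywords;
            }

            public static void main(String[] args) {
                System.out.println(topKeywords("breaking news news weather airport delays", 3));
            }
        }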

    Read the article

  • XMPP4R callbacks don't seem to work

    - by Sid
    I'm using xmpp4r and trying to get the hang of a basic chat feature that I wish to implement later in my Rails app. My fundamentals on Ruby threads are still a bit shaky, so I would appreciate any help on this. Though I register the callback, I don't get a response from my Gmail account. I am able to send a message, but my Ruby program terminates. In order to prevent it from terminating I tried to stop one of the threads in the program, but I can't seem to get it working. require 'rubygems' require "xmpp4r/client" require "xmpp4r/roster" include Jabber def connect client = Client.new(JID::new("[email protected]")) client.connect client.auth("test") client.send(Presence.new.set_type(:available)) client end def create_message(message, to_email) msg = Jabber::Message::new(to_email, message) msg.type = :chat msg end def subscribe(email_id) pres = Presence.new.set_type(:subscribe).set_to(email_id) pres end client = connect roster = Roster::Helper.new(client) roster.add_subscription_request_callback do |item,pres| roster.accept_subscription(pres.from) end def create_callback(client) $t4= Thread.new do client.add_message_callback do |m| puts m.body puts "................................Callback working" end end end puts "Client has connected" msg = create_message("Welcome to the winter of my discontent", "[email protected]") client.send(msg) create_callback(client) def check(client) $t3 = Thread.new do loop do puts "t3 still running........." Thread.current.stop $t4.join end end end check(client)
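    A minimal sketch of the usual xmpp4r pattern (placeholder JIDs, untested here): register the message callback directly on the client, since xmpp4r invokes callbacks on its own receive thread and no extra Thread.new is needed, then park the main thread so the process does not exit right after sending:

        require 'rubygems'
        require 'xmpp4r/client'
        include Jabber

        client = Client.new(JID.new('sender@example.com'))   # placeholder account
        client.connect
        client.auth('secret')
        client.send(Presence.new.set_type(:available))

        # Callback is invoked on xmpp4r's receive thread.
        client.add_message_callback do |m|
          puts "got: #{m.body}" if m.type == :chat
        end

        client.send(Message.new('friend@example.com', 'Welcome to the winter of my discontent').set_type(:chat))

        Thread.stop   # keep the main thread alive so callbacks keep firing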

    Read the article

  • Efficient way to store order in mySQL for list of items

    - by ninumedia
    I want to code cleaner and more efficiently and I wanted to know any other suggestions for the following problem: I have a mySQL database that holds data about a set of photograph names. Oh, say 100 photograph names Table 1: (photos) has the following fields: photo_id, photo_name Ex data: 1 | sunshine.jpg 2 | cloudy.jpg 3 | rainy.jpg 4 | hazy.jpg ... Table 2: (categories) has the following fields: category_id, category_name, category_order Ex data: 1 | Summer Shots | 1,2,4 2 | Winter Shots | 2,3 3 | All Seasons | 1,2,3,4 ... Is it efficient to store the order of the photos in this manner per entry via comma delimited values? It's one approach I have seen used before but I wanted to know if something else is faster in run time. Using this way I don't think it is possible to do a direct INNER JOIN on the category table and photo table to get a single matched list of all the photographs per category. Ex: Summer shots - sunshine.jpg, cloudy.jpg, hazy.jpg because it was matched against 1,2,4 The iteration through all the categories and then the photos will have a O(n^2) and there has to be a better/faster way. Please educate me :)
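    For contrast, a sketch of the usual normalized alternative (table and column names invented): one row per category/photo pair with an explicit position, so the ordered list comes back from a single join instead of parsing "1,2,4" in application code:

        -- Join table replacing the comma-delimited category_order column.
        CREATE TABLE categories_photos (
            category_id INT NOT NULL,
            photo_id    INT NOT NULL,
            position    INT NOT NULL,
            PRIMARY KEY (category_id, photo_id)
        );

        -- All photos in "Summer Shots" (category 1), already in display order.
        SELECT p.photo_name
        FROM photos p
        JOIN categories_photos cp ON cp.photo_id = p.photo_id
        WHERE cp.category_id = 1
        ORDER BY cp.position;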

    Read the article

  • Drupal module for complex hours of operation / office hours

    - by Eronarn
    Background: I am building a website in Drupal that links together a wide variety of social service providers for the purposes of discovery, collaboration, and all that good stuff. The goal is to make a website that is simple to browse for consumers of these services and simple to update for providers of these services. The beta has been very well received, but I want to switch to a different information schema before the site goes live. Specific question: I am looking for a module (or other solution) that... Stores this data in Drupal (i.e., no GCal) Supports a wide variety of repeats Is intuitive for people editing the node (no Cron-style interfaces, please!) I have looked into several modules on drupal.org and none seem to meet all of these criteria. I've also searched here, and while this question is similar: http://stackoverflow.com/questions/2794149/drupal-create-a-node-with-employee-working-hours my needs are too complex for the offered solution. Some of these providers have "hours" such as "the third Wednesday of every month", or "open during Winter months", or separate hotline & office hours. Likewise, the Date Repeat module doesn't cut it as stands currently. I'm comfortable hacking what I need into an existing module - I just don't want to duplicate effort! If you have a suggestion on what module might be a good starting point, I'd appreciate that input, too. Thanks. <3

    Read the article

  • MBPro, mid 2010 can't see Dlink DIR655 signal after sleep etc

    - by user88114
    This is my son's MBP 7,1 running Snow Leopard 10.6.7. The router signal is fine, since an iPad and a Wintel machine on the same table 20 feet from the router are fine. The MBP, however, frequently wakes and fails to find the internet. iStumbler can see that one neighbour's hub and my garden hub are there, but it can't get to the normal DIR655 wifi... no ping, and no en0 or en1 device seems to exist. Airport off and on does not help. He just resets the router and it all works, but this does not please me! I must admit it sometimes seems to lose the connection in winter too, but less so. The DIR655 (hardware rev A3) is on the original EU firmware 1.10. I'm cautious about jumping to the latest 1.31EU, since no downgrade seems to be possible and that feels a bit risky as so much is set up and working fine. If I use the DIR655 admin web interface and release the lease the MBP has, then wake it, it all works OK. So I suspect a lease timing/locking issue, but I'm unsure how to check; plus, why does iStumbler say the network is not visible at all when the iPad sitting right next to it works just fine? I do not think there are any channel overlaps, and we also have RF-quiet DECT phones (Orchid) that are silent until lifted or called. Anyway, signals all show low interference and high throughput except for this failure to connect. Just walked the MBP to the garden office, and iStumbler now sees the more distant DIR655 signal, although it will not connect to it (it does not show in the list of network names under System Prefs > Network) even after turning Airport off and on... It also refuses to connect to my garden network (an old Belkin acting as an AP, wired to the DIR655), whose signal it can see, and the network name even shows under System Prefs > Network (2 mins later): NOW both names ARE visible, but both fail to accept the correct WPA2 password and keep asking again after failing to connect. IT ALL MAKES NO SENSE TO ME. Just revoked the lease for the MBP on the DIR655 and no change, although this seemed to help the MBP wake into a connection 1 hour ago. OK, a bit of walking about to report. Carried the MBP across the garden towards the DIR655; a few other wifis show up on iStumbler, low signals, all on channel 1. Right next to the DIR655, iStumbler is still not showing it, although most other wifis have gone. I'd say iStumbler is suffering timeouts and hangs, but it's hard to be sure. Lots of attempts at Airport on/off, join other, etc., and suddenly I get to connect, am given a new IP (I revoked the old lease), and can browse. Walk away, and the connection drops quite soon at 30 feet, then reconnects briefly, then dies again. MUST ATTEND ELSEWHERE FOR A BIT...

    Read the article

  • My server's been hacked EMERGENCY

    - by Grant unwin
    I'm on my way into work at 9.30 p.m. on a Sunday because our server has been compromised somehow and was resulting in a DoS attack on our provider. The server's access to the Internet has been shut down, which means over 500-600 of our clients' sites are now down. Now this could be an FTP hack, or some weakness in code somewhere. I'm not sure till I get there. How can I track this down quickly? We're in for a whole lot of litigation if I don't get the server back up ASAP. Any help is appreciated. UPDATE Thanks to everyone for your help. Luckily I WASN'T the only person responsible for this server, just the nearest. We managed to resolve this problem, although it may not apply to many others in a different situation. I'll detail what we did. We unplugged the server from the net. It was performing (attempting to perform) a Denial of Service attack on another server in Indonesia, and the guilty party was also based there. We first tried to identify where on the server this was coming from; considering we have over 500 sites on the server, we expected to be moonlighting for some time. However, as we still had SSH access, we ran a command to find all files edited or created around the time the attacks started. Luckily, the offending file was created over the winter holidays, which meant that not many other files were created on the server at that time. We were then able to identify the offending file, which was inside the uploaded images folder within a ZenCart website. After a short cigarette break we concluded that, due to the file's location, it must have been uploaded via a file upload facility that was inadequately secured. After some googling, we found that there was a security vulnerability within the ZenCart admin panel that allowed files to be uploaded as a picture for a record company (a section that was never really even used). Posting this form just uploaded any file; it did not check the extension of the file, and didn't even check to see if the user was logged in. This meant that any files could be uploaded, including a PHP file for the attack. We secured the vulnerability with ZenCart on the infected site, and removed the offending files. The job was done, and I was home for 2 a.m. The Moral - Always apply security patches for ZenCart, or any other CMS system for that matter. When security updates are released, the whole world is made aware of the vulnerability. - Always do backups, and back up your backups. - Employ or arrange for someone who will be there in times like these, to prevent anyone from relying on a panicky post on Server Fault. Happy servering!

    Read the article

  • Database server consolidation with Oracle technologies - resource allocation

    - by Lajos Sárecz
    Virtualization is the default answer for server consolidation, yet Oracle Database has capabilities that deliver the benefits of virtualization while outperforming it. Consolidating several databases can be done on one large server or on a cluster of several servers. Whichever solution we choose (I will not go into their pros and cons here), one of the most important problems to solve is separating the databases reliably, both from a data security and from a resource management point of view. Software and hardware virtualization make it possible to split a server's resources among several virtual servers and thereby separate the database instances running in parallel, but these solutions are usually costly, add administration overhead and cause a performance hit. Below is a short summary of the resource separation technologies in Oracle Database that work well for database consolidation. Database services: Probably every expert working with Oracle databases knows that clients reach a database through its database service name. By default every database has a single service, which is automatically given the same name as the 'global database name' parameter when the database is created, but several service names can be assigned to one database. Services can be used to group the clients performing different tasks, and for each service we can define how much system resource is allocated to that client group. With clustered databases (RAC) a service can be attached to several database instances (servers), which provides load balancing driven by the actual load (the Resource Manager already plays a role here, see below). It becomes irrelevant to the application which server is serving a given service. The resources behind a service can be extended dynamically at runtime, and the loss of failed resources is also handled (failover). Database Resource Manager: The Oracle Database Resource Manager manages resources at the database level and regulates CPU usage by controlling the database workload. At any given moment the Resource Manager lets only a single Oracle process run on a CPU while the others wait (just as an operating system scheduler works). The Resource Manager only steps in when CPU utilization reaches 100%; at that point it can limit, according to the Resource Plan, the amount of resource (CPU) available to each resource group. Instance Caging: With the Instance Caging technology, available as part of the Resource Manager from Oracle Database 11gR2 onwards, the number of CPUs allocated to a database instance can be controlled at the instance level, without virtualization or operating-system-level resource partitioning. This is useful when several instances have to run on one server. Instance Caging is activated per database instance by enabling the Resource Manager and setting the cpu_count parameter. cpu_count is a dynamic parameter and is best set to the maximum number of CPUs the given instance may claim. The processors available to the instances can be oversubscribed: on a 4-CPU server with 3 instances, for example, each of the three can be given 3 CPUs. 
    However, if all of them are under load, each instance receives a share of the processors proportional to its maximum allocated CPU count divided by the total allocated CPU count, which in this example is 33.33%, i.e. 1.33 CPUs. Input Output Resource Manager (IORM): Not only processor usage can be regulated; the resources of shared storage can also be divided up. With the Input Output Resource Manager (IORM) we can control, at the storage level, the minimum I/O levels between databases and within them. Database Vault: For applications consolidated into the same database, administrator roles can be separated with the Oracle Database Vault technology. This makes it possible to consolidate databases securely, so that every administrator can see and modify only the data and objects that belong to them.
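    Returning to Instance Caging for a moment, a minimal sketch of the two statements that switch it on for a single instance (the plan name and CPU count are only examples):

        -- Enable the Resource Manager with some plan, then cage the instance.
        ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN';
        ALTER SYSTEM SET cpu_count = 3;   -- maximum CPUs this instance may use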

    Read the article

  • How to fix: Ubuntu 12.04 reboots after loading with elilo

    - by Casey
    I have an HP p6-2120 with CPU: AMD A6-3620 APU with Radeon Graphics RAM: 6GB BIOS: HO2_710.ROM v7.10 [AMI v7.10 4/19/2012] Disk: SATA1 (/dev/sda) - 1 TB (windows) Disk: SATA2 (/dev/sdb) - 1 TB partitioned using "parted -a optimal /dev/sdb" as follows: .. 1049KB 201MB FAT32 boot flag set .. 201MB 60GB ext2 (/) .. 68GB 78GB linux-swap(v1) (swap) .. 78GB 790GB ext4 (/home) .. - rest is "free" space reserved for other purposes (eventually) ubuntu: 12.04.1 LTS [specifically: Release 12.04 (precise) 64-bit] kernel: linux 3.2.0-29-generic I created a bootable EFI USB from the ISO (64-bit) which I downloaded. I can run and install from the USB without any problems. The BIOS is an EFI bios that appears to be capable of booting in either EFI or Legacy mode. Initially, I did the "standard" install with NOTHING on disk2, and let the installer configure everything. The net result of this was that when I started the computer and forced it into "boot" menu mode, it DOES NOT recognize SATA2 as an EFI drive, and when I attempt to "legacy" boot from it, I get the message "ERROR: No Boot Disk has been detected." The "standard" install created one large partition that consumed the entire disk. At that point, I manually partitioned the disk (using sudo parted -a optimal /dev/sdb) as described above. I selected the "other" install, and changed the /dev/sdb1 to "bios_grub", /dev/sdb2 as "/" (ext4), /dev/sdb3 as swap, and /dev/sdb4 as "/home". [Note: fearing that possibly elilo did not recognize ext4, I switched /dev/sdb2 to ext2 and re-insalled] The net result was that the install appeared to trash the /dev/sdb1 partition so that it was NOT readable by anything. I re-formated /dev/sdb1 as FAT32 and set the boot flag. I repeated the install ignoring the messages about no bios_grub partition. After several attempts to get GRUB2 to work, I switched to elilo. I downloaded the most recent version and copied it (elilo-3.14-ia64.efi) to /dev/sdb1/efi/boot/bootx64.efi. (The BIOS boot loader did not recognize it either as elilo-3.14.ia64.efi or as elilo.efi. Based on the advice in one of the web-pages I found, I renamed it to bootx64.efi. This worked.) In that same directory (/efi/boot), I copied the file pointed to the link in /dev/sdb2/vmlinuz to /efi/boot/vmlinuz, and the file pointed to the link in /dev/sdb2/initrd.img to /efi/boot/initrd.img. I created an elilo.conf file as follows: timeout=5000 prompt default=linux-boot image=vmlinuz label=linux-boot read-only initrd=initrd.img root=/dev/sdb2 The /efi/boot directory contains 4 files: bootx64.efi elilo.conf vmlinuz initrd.img When I power-cycle the computer and force the boot menu, drive2 shows up as an EFI bootable drive. When I select it, I get the elilo prompt. Pressing , it appears to load the kernal (I have tried it with verbose=5, and there is a long string of messages with the final one a command line to load the kernel and a series of several dots that fly by) then the screen goes blank, and it reboots the computer. [Note: I have also tried substituting the UUID as found in the /etc/fstab of the installed system for the root directory. This had no effect.] This is a brief synopsis of several nights of fiddling with this. I would deeply appreciate any help you can give.

    Read the article

  • Playing video from URL -Android

    - by Rajeev
    In the following code, what am I doing wrong? The video doesn't seem to play. Is it a permission issue, and if so, what should be included in the manifest file? This is main.xml: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="wrap_content" android:layout_height="wrap_content" android:orientation="horizontal" android:baselineAligned="true"> <LinearLayout android:layout_height="match_parent" android:layout_width="wrap_content" android:id="@+id/linearLayout2"></LinearLayout> <MediaController android:id="@+id/mediacnt" android:layout_width="wrap_content" android:layout_height="wrap_content"></MediaController> <LinearLayout android:layout_height="match_parent" android:layout_width="wrap_content" android:id="@+id/linearLayout1" android:orientation="vertical"></LinearLayout> <Gallery android:layout_height="wrap_content" android:id="@+id/gallery" android:layout_width="wrap_content" android:layout_weight="1"></Gallery> <VideoView android:layout_height="match_parent" android:id="@+id/vv" android:layout_width="wrap_content"></VideoView> </LinearLayout> And this is the Java class: package com.gallery; import java.net.URL; import android.app.Activity; import android.net.Uri; import android.os.Bundle; import android.widget.MediaController; import android.widget.Toast; import android.widget.VideoView; public class GalleryActivity extends Activity { /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); Toast.makeText(GalleryActivity.this, "Hello world", Toast.LENGTH_LONG).show(); VideoView videoView = (VideoView) findViewById(R.id.vv); MediaController mediaController = new MediaController(this); mediaController.setAnchorView(videoView); // Set video link (mp4 format ) Uri video = Uri.parse("http://www.youtube.com/watch?v=lEbxLDuecHU&playnext=1&list=PL040F3034C69B1674"); videoView.setMediaController(mediaController); videoView.setVideoURI(video); videoView.start(); } }
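    Two common causes worth checking, offered here as hedged guesses rather than a confirmed diagnosis:

        <!-- 1) VideoView needs the INTERNET permission declared in AndroidManifest.xml: -->
        <uses-permission android:name="android.permission.INTERNET" />

        <!-- 2) VideoView/MediaPlayer expects a direct, streamable media URL (for example
             an .mp4 or .3gp file). The YouTube "watch" URL above returns an HTML page,
             not a video stream, so it cannot be played this way. -->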

    Read the article

  • Extracting shell script from parameterised Hudson job

    - by Jonik
    I have a parameterised Hudson job, used for some AWS deployment stuff, which in one build step runs certain shell commands. However, that script has become sufficiently complicated that I want to "extract" it from Hudson to a separate script file, so that it can easily be versioned properly. The Hudson job would then simply update from VCS and execute the external script file. My main question is about passing parameters to the script. I have a Hudson parameter named AMI_ID and a few others. The script references those params as if they were environment variables: echo "Using AMI $AMI_ID and type $TYPE" Now, this works fine inside Hudson, but not if Hudson calls an external script. Could I somehow make Hudson set the params as environment variables so that I don't need to change the script? Or is my best option to alter the script to take command line parameters (and possibly assign those to named variables for readability: ami_id=$1; type=$2; ... )? I tried something like this but the script doesn't get correctly replaced values: export AMI_ID=$AMI_ID export TYPE=$TYPE external-script.sh # this tries to use e.g. $AMI_ID Bonus question: when the script is inside Hudson, the "console output" will contain both the executed commands and their output. This is extremely useful for debugging when something goes wrong with a build! For example, here the line starting with "+" is part of the script and the following line its output: + ec2-associate-address -K pk.pem -C cert.pem 77.125.116.139 -i i-aa3487fd ADDRESS 77.125.116.139 i-aa3487fd When calling an external script, Hudson output will only contain the latter line, making debugging harder. I could cat the script file to stdout before running it, but that's not optimal either. In effect, I'd like a kind of DOS-style "echo on" for the script which I'm calling from Hudson - anyone know a trick to achieve this?
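    A sketch of one way this is commonly wired up, assuming Hudson exposes build parameters such as AMI_ID and TYPE as environment variables to the shell build step and to the processes it spawns; the -x flag also addresses the bonus question by echoing each command of the external script to the console output:

        # Hudson "Execute shell" build step (sketch)
        sh -x ./external-script.sh

        # external-script.sh (sketch): read parameters from the environment,
        # with positional arguments as a fallback for manual runs.
        ami_id="${AMI_ID:-$1}"
        type="${TYPE:-$2}"
        echo "Using AMI $ami_id and type $type"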

    Read the article

  • getting hasMany records of a HABTM relationship

    - by charliefarley321
    I have tables: categories HABTM sculptures hasMany images from the CategoriesController#find() produces an array like so: array( 'Category' => array( 'id' => '3', 'name' => 'Modern', ), 'Sculpture' => array( (int) 0 => array( 'id' => '25', 'name' => 'Ami', 'material' => 'Bronze', 'CategoriesSculpture' => array( 'id' => '18', 'category_id' => '3', 'sculpture_id' => '25' ) ), (int) 1 => array( 'id' => '26', 'name' => 'Charis', 'material' => 'Bronze', 'CategoriesSculpture' => array( 'id' => '19', 'category_id' => '3', 'sculpture_id' => '26' ) ) ) ) I'd like to be able to get the images related to sculpture in the array as well if this is possible?
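    A sketch of the usual CakePHP approach for pulling deeper associations, assuming the Containable behavior is attached to the models involved (untested against this schema):

        <?php
        // 'contain' tells the find to also fetch each Sculpture's hasMany Image rows.
        $this->Category->Behaviors->attach('Containable');
        $categories = $this->Category->find('all', array(
            'contain' => array(
                'Sculpture' => array('Image')
            )
        ));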

    Read the article

  • Java Play Mustache NPE Error

    - by zanedev
    We are getting a mustache play error in production (amazon linux EC2 AMI) but not in development (MACs) and we have tried upgrading the jvm, using the jdk instead, and changing from a tomcat deploy model to match our development environments as much as possible but nothing is working. Please any help would be greatly appreciated. We have lots of shared code in java and javascript using mustache and it would be a big deal to rewrite everything if we had to ditch mustache on the java side. 20:48:52,403 ERROR ~ @6al2dd0po Internal Server Error (500) for request GET /mystuff/people Execution exception (In {module:mustache-0.2}/app/play/modules/mustache/MustacheTags.java around line 32) NullPointerException occured : null play.exceptions.JavaExecutionException at play.templates.BaseTemplate.throwException(BaseTemplate.java:90) at play.templates.GroovyTemplate.internalRender(GroovyTemplate.java:257) at play.templates.Template.render(Template.java:26) at play.templates.GroovyTemplate.render(GroovyTemplate.java:187) at play.mvc.results.RenderTemplate.<init>(RenderTemplate.java:24) at play.mvc.Controller.renderTemplate(Controller.java:660) at play.mvc.Controller.renderTemplate(Controller.java:640) at play.mvc.Controller.render(Controller.java:695) at controllers.MyStuff.people(MyStuff.java:183) at play.mvc.ActionInvoker.invokeWithContinuation(ActionInvoker.java:548) at play.mvc.ActionInvoker.invoke(ActionInvoker.java:502) at play.mvc.ActionInvoker.invokeControllerMethod(ActionInvoker.java:478) at play.mvc.ActionInvoker.invokeControllerMethod(ActionInvoker.java:473) at play.mvc.ActionInvoker.invoke(ActionInvoker.java:161) at Invocation.HTTP Request(Play!) Caused by: java.lang.NullPointerException at play.modules.mustache.MustacheTags._template(MustacheTags.java:32) at play.modules.mustache.MustacheTags$_template.call(Unknown Source) at /app/views/User/people.html.(line:22) at play.templates.GroovyTemplate.internalRender(GroovyTemplate.java:232) ... 13 more

    Read the article

  • Should I use a regular server instead of AWS?

    - by Jon Ramvi
    Reading about and using Amazon Web Services, I'm not really able to grasp how to use it correctly. Sorry about the long question: I have an EC2 instance which mostly does the work of a web server (Apache for file sharing and Tomcat with Play Framework for the web app). As it's a web server, the instance is running 24/7. It just came to my attention that the data on the EC2 instance is non-persistent. This means I lose my database and files if it's stopped. But I guess it also means my server settings and installed applications are lost, as they are just files in the same way as the other data. This means that I will either have to rewrite the whole app to use amazon CloudDB or write some code which stores the db on S3, and make my own AMI with the correct applications installed and configured. Or can this be quick-fixed by using EBS somehow? My questions are: 1. Is my understanding of AWS correct? and 2. Is it worth it? It could be a possibility to just set up a regular dedicated server where everything is persistent, as you would expect. Would love to have the scalability of AWS though...

    Read the article

  • Retrieving license type (linux/windows/windows+sqlserver) for an Amazon EC2 instance via the API?

    - by Geir
    I need to calculate the hourly running costs for my Amazon EC2 instances. This varies even between instances with the same hardware configs (instance types), because I use different Amazon images (AMIs): some plain Windows Server and some Windows Server with SQL Server (both of which have additional costs compared with plain Linux instances). The EC2 Java API has a describeInstances() method which returns Instance objects with metadata such as instance id, instance type (m1.small/large...), state (running, stopped...), public IP, etc. This Instance object also has a .getLicense().getPool() which according to the Java API should return "The license pool from which this license was used (ex: 'windows')." I thought this is where it might also give 'windows+sqlserver' or something to that effect. The getLicense() method does, however, return null. I've navigated around the EC2 web console without being able to find this information, but I'm hoping that it is possible - otherwise it would mean that you cannot identify the true hourly cost of a particular instance unless you know which AMI was used to create it in the first place (plain Windows Server or Windows Server with SQL Server). Anyone? Thanks :)
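    A sketch of how far the plain Java SDK appears to go (class and method names from the 1.x AWS SDK for Java; credentials are placeholders): the platform field distinguishes Windows from Linux, but telling "windows" from "windows+sqlserver" still seems to require looking at the AMI the instance was launched from:

        import com.amazonaws.auth.BasicAWSCredentials;
        import com.amazonaws.services.ec2.AmazonEC2;
        import com.amazonaws.services.ec2.AmazonEC2Client;
        import com.amazonaws.services.ec2.model.DescribeInstancesResult;
        import com.amazonaws.services.ec2.model.Instance;
        import com.amazonaws.services.ec2.model.Reservation;

        public class InstancePlatforms {
            public static void main(String[] args) {
                AmazonEC2 ec2 = new AmazonEC2Client(
                        new BasicAWSCredentials("accessKey", "secretKey")); // placeholders
                DescribeInstancesResult result = ec2.describeInstances();
                for (Reservation reservation : result.getReservations()) {
                    for (Instance instance : reservation.getInstances()) {
                        System.out.println(instance.getInstanceId()
                                + " type=" + instance.getInstanceType()
                                + " platform=" + instance.getPlatform()  // "windows" or null for Linux
                                + " image=" + instance.getImageId());    // AMI to check for SQL Server
                    }
                }
            }
        }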

    Read the article

  • Managing server instance identity on EC2

    - by kikibobo
    I recently brought up a cluster on EC2, and I felt like I had to invent a lot of things. I'm wondering what kinds of tools, patterns, ideas are out there for how to deal with this. Some context: I had 3 different kinds of servers, so first I created AMIs for each of them. The first AMI had zookeeper, so step one in deploying the system was to get the zookeeper server running. My script then made a note of the mapping between EC2's completely arbitrary and unpredictable hostnames, and the zookeeper server. Then as I brought up new instances of the other 2 kinds of servers, the first thing I would do is ssh to the new server, and add the zookeeper server to its /etc/hosts file. Then as the server software on each instance starts up, it can find zookeeper. Obviously this is a problem that lots of people have to solve, and it probably works a little bit differently in different clouds. Are there products that address this concept? I was pretty surprised that EC2 didn't provide some kind of way to tie your own name to its name. Thanks for any ideas.

    Read the article

  • Spotlight on Claims: Serving Customers Under Extreme Conditions

    - by [email protected]
    Oracle Insurance's director of marketing for EMEA, John Sinclair, recently attended the CII Spotlight on Claims event in London. Bad weather and its implications for the insurance industry have become very topical as the frequency and diversity of natural disasters - including rains, wind and snow - has surged across Europe this winter. On England's wettest day on record, the county of Cumbria was flooded with 12 inches of rain within 24 hours. Freezing temperatures wreaked havoc on European travel, causing high speed TVG trains to break down and stranding hundreds of passengers under the English Chanel in a tunnel all night long without heat or electricity. A storm named Xynthia thrashed France and surrounding countries with hurricane force, flooding ports and killing 51 people. After the Spring Equinox, insurers may have thought the worst had past. Then came along Eyjafjallajökull, spewing out vast quantities of volcanic ash in what is turning out to be one of most costly natural disasters in history. Such extreme events challenge insurance companies' ability to service their customers just when customers need their help most. When you add economic downturn and competitive pressures to the mix, insurers are further stretched and required to continually learn and innovate to meet high customer expectations with reduced budgets. These and other issues were hot topics of discussion at the recent "Spotlight on Claims" seminar in London, focused on how weather is affecting claims and the insurance industry. The event was organized by the CII (Chartered Insurance Institute), a group with 90,000 members. CII has been at the forefront in setting professional standards for the insurance industry for over a century. Insurers came to the conference to hear how they could better serve their customers under extreme weather conditions, learn from the experience of their peers, and hear about technological breakthroughs in climate modeling, geographic intelligence and IT. Customer case studies at the conference highlighted the importance of effective and constant communication in handling the overflow of catastrophe related claims. First and foremost is the need to rapidly establish initial communication with claimants to build their confidence in a positive outcome. Ongoing communication then needs to be continued throughout the claims cycle to mange expectations and maintain ownership of the process from start to finish. Strong internal communication to support frontline staff was also deemed critical to successful crisis management, as was communication with the broader insurance ecosystem to tap into extended resources and business intelligence. Advances in technology - such web based systems to access policies and enter first notice of loss in the field - as well as customer-focused self-service portals and multichannel alerts, are instrumental in improving customer satisfaction and helping insurers to deal with the claims surge, which often can reach four or more times normal workloads. Dynamic models of the global climate system can now be used to better understand weather-related risks, and as these models mature it is hoped that they will soon become more accurate in predicting the timing of catastrophic events. Geographic intelligence is also being used within a claims environment to better assess loss reserves and detect fraud. 
Despite these advances in dealing with catastrophes and predicting their occurrence, there will never be a substitute for qualified front line staff to deal with customers. In light of pressures to streamline efficiency, there was debate as to whether outsourcing was the solution, or whether it was better to build on the people you have. In the final analysis, nearly everybody agreed that in the future insurance companies would have to work better and smarter to keep on top. An appeal was also made for greater collaboration amongst industry participants in dealing with the extreme conditions and systematic stress brought on by natural disasters. It was pointed out that the public oftentimes judged the industry as a whole rather than the individual carriers when it comes to freakish events, and that all would benefit at such times from the pooling of limited resources and professional skills rather than competing in silos for competitive advantage - especially the end customer. One case study that stood out was on how The Motorists Insurance Group was able to power through one of the most devastating catastrophes in recent years - Hurricane Ike. The keys to Motorists' success were superior people, processes and technology. They did a lot of upfront planning and invested in their people, creating a healthy team environment that delivered "max service" even when they were experiencing the same level of devastation as the rest of the population. Processes were rapidly adapted to meet the challenge of the catastrophe and continually adapted to Ike's specific conditions as they evolved. Technology was fundamental to the execution of their strategy, enabling them anywhere access, on the fly reassigning of resources and rapid training to augment the work force. You can learn more about the Motorists experience by watching this video. John Sinclair is marketing director for Oracle Insurance in EMEA. He has more than 20 years of experience in insurance and financial services.

    Read the article

  • Five Reasons to Attend PLM Summit 2013: The Conference Formerly Known as AGILITY

    - by Terri Hiskey
    As we approach the end of 2012, we are also closing in on the last couple of weeks that Agile customers and prospects can register for the upcoming PLM Summit 2013 at the bargain early bird rate of $195. Register now to secure your spot!

    The Conference Formerly Known as AGILITY... Long-time Agile customers may remember AGILITY, Agile's annual PLM customer conference held prior to Oracle's acquisition of Agile in 2007. In February 2012, in response to feedback from our Agile PLM community, we successfully resurrected the AGILITY conference and renamed it the PLM Summit. The PLM Summit was so well received and well attended that we are doing it again in 2013. This upcoming PLM Summit is being co-located in San Francisco under the overarching banner of the Oracle Value Chain Summit, and will be held alongside several other Oracle customer conferences that cover a range of value chain solutions, including Value Chain Planning, Value Chain Execution, Procurement, Maintenance and Manufacturing. This setup offers PLM attendees the best of all worlds: the opportunity to participate and learn about PLM in smaller, focused sessions by product and by industry, while also giving attendees the chance to see how PLM works together with other critical enterprise applications that address other important aspects of the value chain.

    Top Five Reasons to Attend the PLM Summit 2013. In the spirit of all of the end-of-the-year lists that are currently popping up, here is a list of the top five reasons to attend the PLM Summit, for anyone out there who needs a little extra encouragement to register:

    1. The Best Opportunities for Customer Networking. The PLM Summit offers attendees numerous opportunities to learn from and network with fellow Agile users. Customer stories are featured in keynote and breakout presentations, and the schedule allows for plenty of networking time during breakfasts, lunches, breaks and dinners. Customer networking is the number one reason that Agile users attend the PLM Summit. Read what attendees thought of the most recent PLM Summit: "Hearing about the implementation of Agile products from a customers’ perspective is invaluable." - Director of Quality Assurance & Regulatory Affairs, leading medical device manufacturer. "Understanding the scope of other companies’ projects and the lessons learned made attending this event well worth my time." - Director of Test Engineering, global industrial manufacturer. "The most beneficial thing about attending this event is the opportunity to network with other customers with similar experiences." - Director of Business Process Improvement, leading high technology company. Come to the PLM Summit and play an active role within the PLM community: swap war stories and business cards, connect on LinkedIn and Facebook, share your stories and discuss the sessions from each day. Register now!

    2. It's Educational! The PLM Summit is the premier educational event for anyone in the Agile PLM community. There are nearly 40 in-depth, PLM-focused educational sessions led by Agile PLM experts, customers and partners, covering a range of specific product and industry topics. Keynotes will give attendees a broad overview of the entire Agile PLM footprint, while sessions will delve deeply into specific product functionality and customer case studies. There is truly something for everyone. Check out the latest agenda for a view of all the sessions.

    3. Visit with the PLM Partner Community. Our partners play a significant and important role within the Agile PLM community. At the PLM Summit, attendees will be able to meet and mingle with several of the top Oracle Agile PLM partners, including: Deloitte, Domain, GoEngineer, Hitachi Consulting, IBM, Kalypso, KPIT Cummins (CPG Solutions), Perception Software, Verdant, Xavor and ZeroWaitState. Go here for a complete list of all the Value Chain Summit sponsors.

    4. See Agile PLM in Action at our Dedicated PLM Demo Pods. At the PLM Summit, attendees will have the chance to see Agile PLM in action at dedicated PLM demo pods, manned by expert members of our Agile PLM team. If you would like to see specific Agile PLM functionality up close, if you have a question on how to extend the scope of your current implementation, or if you want a better understanding of how to leverage Agile PLM to address specific use cases, stop by one of the Agile PLM demo pods and engage the Agile PLM experts on hand at the PLM Summit.

    5. Spend Some Time in Lovely San Francisco. Still on the fence about the upcoming PLM Summit? Remember that it is being held in San Francisco, a fantastic city for a getaway. After spending time learning and networking about PLM, take an extra day or two to escape the dreary winter and enjoy the beautiful scenery and the unique activities offered only by the City by the Bay. You will walk away from the conference not only with renewed excitement about Agile PLM, but feeling rejuvenated in general.

    Read the article

  • Data management in unexpected places

    - by Ashok_Ora
    When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput, with low latencies and high availability. Consider a network router that is processing tens (or hundreds) of thousands of network packets per second. So what really happens inside a router?

    Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example a destination, associated SLAs, etc. For each packet, the router has to determine the address of the next “hop” to the destination; it has to determine how to prioritize this packet. If it’s a high priority packet, then it has to be sent on its way before lower priority packets. As a consequence of prioritizing high priority packets, lower priority data packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the data packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone’s sure to ask). You have to do all this (and more) while preserving high availability, i.e. if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won’t be happy with a “choppy” VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator; the router is most likely in a remote location, so it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen.

    How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers in order to determine the destination, the kind of packet (e.g. voice vs. data), the SLAs associated with the “owner” of the packet, etc. It looks up the internal database of “rules” for how to process this packet and handles the packet accordingly. The software might choose to hold on to the packet safely for some period of time, if it’s a low priority packet. Ah, this sounds very much like a database problem. For each packet, you have to minimally:

    · Look up the most efficient next “hop” towards the destination. The “most efficient” next hop can change, depending on latency, availability, etc.
    · Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data FTP).
    · Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet, since a network packet is a small “slice” of a session. The context for the “header” packet needs to be stored in the router in order to make this work.
    · If the priority of the packet is low, then “store” the packet temporarily in the router until it is time to forward the packet to the next hop.
    · Update various statistics about the packet.

    In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the “send” in a single transaction so that the forwarding address doesn’t change while you’re sending the packet. So, how do you do all this? Berkeley DB is a proven, reliable, high performance, highly available embeddable database, designed for exactly these kinds of usage scenarios.
    Berkeley DB (or BDB for short) is a robust, reliable, proven solution that is already being used in exactly these scenarios. First and foremost, BDB is very, very fast: it can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database or as a disk-persistent database. BDB provides high availability: if one board in the router fails, the system can automatically fail over to another board, with no manual intervention required. BDB is self-administering; there’s no need for manual intervention to maintain a BDB application, and no need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations. BDB has been used in over 200 million deployments worldwide over the past two decades for mission-critical applications such as the one described here. You have a choice of spending valuable resources to implement similar functionality, or you could simply embed BDB in your application and off you go! I know what I’d do: choose BDB, so I can focus on my business problem. What will you do?
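
    To make the single-transaction pattern above concrete, here is a minimal, hypothetical sketch (not from the original article) using the Berkeley DB C API: it looks up the next hop for a destination and bumps a per-destination packet counter inside one transaction. The database handles (routes, stats), the string key layout, the unsigned long counter format and the forward_packet/next_hop names are illustrative assumptions, not part of any real router implementation.

    #include <db.h>
    #include <string.h>

    /* Hypothetical sketch only: route lookup plus statistics update,
     * grouped in a single Berkeley DB transaction. */
    int forward_packet(DB_ENV *env, DB *routes, DB *stats, const char *dest)
    {
        DB_TXN *txn = NULL;
        DBT key, val;
        char next_hop[64];
        unsigned long count = 0;   /* assumed stored counter format */
        int ret;

        if ((ret = env->txn_begin(env, NULL, &txn, 0)) != 0)
            return ret;

        /* Route lookup: destination -> next hop. */
        memset(&key, 0, sizeof(key));
        memset(&val, 0, sizeof(val));
        key.data = (void *)dest;
        key.size = (u_int32_t)(strlen(dest) + 1);
        val.data = next_hop;
        val.ulen = sizeof(next_hop);
        val.flags = DB_DBT_USERMEM;
        if ((ret = routes->get(routes, txn, &key, &val, 0)) != 0)
            goto err;

        /* ... hand next_hop to the forwarding engine here ... */

        /* Read-modify-write the per-destination packet counter. */
        memset(&val, 0, sizeof(val));
        val.data = &count;
        val.ulen = sizeof(count);
        val.flags = DB_DBT_USERMEM;
        ret = stats->get(stats, txn, &key, &val, DB_RMW);
        if (ret != 0 && ret != DB_NOTFOUND)
            goto err;
        count++;
        memset(&val, 0, sizeof(val));
        val.data = &count;
        val.size = sizeof(count);
        if ((ret = stats->put(stats, txn, &key, &val, 0)) != 0)
            goto err;

        /* Commit makes the lookup and the statistics update atomic. */
        return txn->commit(txn, 0);

    err:
        txn->abort(txn);
        return ret;
    }

    Grouping the lookup and the counter update under one DB_TXN is what gives the all-or-nothing behavior described above: if anything fails partway through, the transaction aborts and the operation can simply be retried.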

    Read the article

  • Nginx 1.6.1: Installing from source not running correctly

    - by Maca
    I just installed Nginx 1.6.1 from source, but it doesn't seem to have installed correctly. Nginx shows as running if I run service nginx status, but when I do nginx -v it outputs command not found. Regular HTML pages are served fine and there are no errors in the error logs. I am on an AWS EC2 Linux AMI. Here is my /etc/init.d/nginx script:

    #!/bin/sh
    #
    # nginx - this script starts and stops the nginx daemon
    #
    # chkconfig:   - 85 15
    # description: Nginx is an HTTP(S) server, HTTP(S) reverse \
    #              proxy and IMAP/POP3 proxy server
    # processname: nginx
    # config:      /etc/nginx/nginx.conf
    # config:      /etc/sysconfig/nginx
    # pidfile:     /usr/local/nginx/logs/nginx.pid

    # Source function library.
    . /etc/rc.d/init.d/functions

    # Source networking configuration.
    . /etc/sysconfig/network

    # Check that networking is up.
    [ "$NETWORKING" = "no" ] && exit 0

    nginx="/usr/local/nginx/sbin/nginx"
    prog=$(basename $nginx)

    NGINX_CONF_FILE="/usr/local/nginx/conf/nginx.conf"

    [ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx

    lockfile=/usr/local/nginx/logs/nginx.lock

    make_dirs() {
        # make required directories
        user=`nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -`
        options=`$nginx -V 2>&1 | grep 'configure arguments:'`
        for opt in $options; do
            if [ `echo $opt | grep '.*-temp-path'` ]; then
                value=`echo $opt | cut -d "=" -f 2`
                if [ ! -d "$value" ]; then
                    # echo "creating" $value
                    mkdir -p $value && chown -R $user $value
                fi
            fi
        done
    }

    start() {
        [ -x $nginx ] || exit 5
        [ -f $NGINX_CONF_FILE ] || exit 6
        make_dirs
        echo -n $"Starting $prog: "
        daemon $nginx -c $NGINX_CONF_FILE
        retval=$?
        echo
        [ $retval -eq 0 ] && touch $lockfile
        return $retval
    }

    stop() {
        echo -n $"Stopping $prog: "
        killproc $prog -QUIT
        retval=$?
        echo
        [ $retval -eq 0 ] && rm -f $lockfile
        return $retval
    }

    restart() {
        configtest || return $?
        stop
        sleep 1
        start
    }

    reload() {
        configtest || return $?
        echo -n $"Reloading $prog: "
        killproc $nginx -HUP
        RETVAL=$?
        echo
    }

    force_reload() {
        restart
    }

    configtest() {
        $nginx -t -c $NGINX_CONF_FILE
    }

    rh_status() {
        status $prog
    }

    rh_status_q() {
        rh_status >/dev/null 2>&1
    }

    case "$1" in
        start)
            rh_status_q && exit 0
            $1
            ;;
        stop)
            rh_status_q || exit 0
            $1
            ;;
        restart|configtest)
            $1
            ;;
        reload)
            rh_status_q || exit 7
            $1
            ;;
        force-reload)
            force_reload
            ;;
        status)
            rh_status
            ;;
        condrestart|try-restart)
            rh_status_q || exit 0
            ;;
        *)
            echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
            exit 2
    esac

    Read the article
