Search Results

Search found 7517 results on 301 pages for 'fast debugger'.


  • Managing game state / 'what to update' within an XNA game 'screen'

    - by codinghands
    Note - having read through other GDev questions suggested when writing this question I'm confident this isn't a dupe. Of course, it's 3am and I'm likely wrong, so please mod as such if so. I'm trying to figure out how best to manage state within my game screens - please bear with me though! At the moment I'm using a heavily modified version of the fantastic game state management example on the XNA site, available here. This is working perfectly for my 'Screens': 'IntroScreen' with some shiny logos, 'TitleScreen' and a 'MenuScreen' stacked on top for the title and menu, 'PlayScreen' for the actual gameplay, etc. Each screen has a bunch of sprites and an 'Update' and 'Draw' method, managed by a 'ScreenManager'. In addition to the above, and as suggested in an answer to my other question here, most screens have a 'GameProcessQueue' class full of 'GameProcess'es, which lets me do just about anything (animations, youbetcha!), in any order, in sequence or in parallel. Why mention all this? When I talk about managing game state I'm thinking more of complex scenarios within a 'Screen'. 'TitleScreen', 'MenuScreen' and the like are all relatively simple; 'PlayScreen' less so. How do people manage the different 'states' within the screen (or whatever you call it) that 'does' gameplay? (For me, the 'PlayScreen'.) I've thought about the following:
    1. An enum of different states in the Screen, an 'activeState' enum-typed variable, and switching on the enum in the Screen's Update() loop to determine which Update 'sub'-function is called. I can see this getting hairy pretty fast as screens get more complex, with the 'PlayScreen' becoming a behemoth mega-class.
    2. A 'State' class with its own Update loop - a Screen can have any number of 'States', one or more of which are 'active'. The Screen's update loop calls update on all active states. States themselves know which Screen they belong to, and may even belong to a 'StateManager' which handles transitioning from one state to the next. Once a state is over, it's removed from the Screen's state list. The Screen doesn't need a bunch of GameProcessQueues; each State has its own.
    3. Abstract Screen further to be more flexible - I can see the similarities between what I've got (game 'Screens' handled by a ScreenManager) and what I want (states within a Screen, and a mechanism to manage them). However, at the moment I see 'Screens' as high level and very distinct ('PlayScreen' with baddies != 'MenuScreen' with 4 words and event handlers), whereas my proposed 'States' are more intrinsically tied to a specific screen with complex requirements. I think.
    This is for a turn-based board game, so it's easy to define things as a discrete series of steps (IntroAnimation - P1Turn - P2Turn - P1Turn - ... - GameOver). Obviously with an open-world RPG things are very different, but any advice for this scenario is appreciated. If I'm just going OOP-crazy please say so. Similarly, I'm conscious there's a huge amount on this site re: state management, but as this is my first 'serious' game after a couple of false starts I'd like to get it right, and would rather be harassed and modded down than never ask :)
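    For what it's worth, option 2 can be sketched in very little code. The following is a minimal C# sketch, not from the XNA sample - 'GameState' and 'StateManager' are hypothetical names; 'GameScreen' and 'GameTime' are the types from the game state management sample and the XNA framework respectively:

        using System.Collections.Generic; // for List<T>

        public abstract class GameState
        {
            protected readonly GameScreen Owner;   // the screen this state belongs to
            public bool IsFinished { get; protected set; }

            protected GameState(GameScreen owner) { Owner = owner; }

            public abstract void Update(GameTime gameTime);
        }

        public class StateManager
        {
            private readonly List<GameState> activeStates = new List<GameState>();

            public void Push(GameState state) { activeStates.Add(state); }

            public void Update(GameTime gameTime)
            {
                // Update every active state; drop the ones that have finished.
                foreach (GameState state in activeStates.ToArray())
                {
                    state.Update(gameTime);
                    if (state.IsFinished)
                        activeStates.Remove(state);
                }
            }
        }

    The PlayScreen's own Update() then just delegates to its StateManager, and each discrete step (IntroAnimation, P1Turn, ...) becomes one small GameState subclass instead of a branch in a mega-switch.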

    Read the article

  • Apprentice Boot Camp in South Africa (Part 2)

    - by Tim Koekkoek
    By Maximilian Michel (DE), Jorge Garnacho (ES), Daniel Maull (UK), Adam Griffiths (UK), Guillermo De Las Nieves (ES), Catriona McGill (UK), Ed Dunlop (UK). Today we have the second part of the adventures of seven apprentices from all over Europe in South Africa!
    Kruger National Park & other experiences
    Going to the Kruger National Park was definitely an experience we will all remember for the rest of our lives. This trip, organised by Patrick Fitzgerald, owner of the Travellers Nest (where we all stayed), took us from the hustle and bustle of Joburg to experience what Africa is all about: the wild! Although the first week of training prior to the trip had gone very well, we all knew this would be a very welcome break before the second week of training started. And we were right: the animals, scenery and sights we saw were simply incredible and, like I said, something we will remember for the rest of our lives. To see lions, elephants, cheetahs, rhinos and many more in a zoo is one thing, but to see them in the wild, in their natural habitat, is very special. I personally only realised this on the early 5 am start of our first morning in the Kruger, which was definitely worth it. It wasn't all about the safari either - we ate some wonderful food, in particular on the Saturday night, when Patrick made us a traditional South African braai, one of my favourite meals of the whole two weeks. After the Kruger National Park we had a whole day of travelling back to Johannesburg, but even this was made a good day by our hosts. Despite the early start on the road, it was all worth it by the time we reached God's Window. The walk to the top was made a lot harder by all the steaks we had eaten in the first week, but it was worth it at the top, with views that stretched for miles.
    The Food
    The food in South Africa is typically meat, and in big amounts; while there we ate a lot of big beef steaks, ribs and kudu sausage. Most of the meat was cooked with a sauce such as a barbecue glaze. The restaurants we visited were: Upperdeck Restaurant, with live music and a great terrace to eat on; the atmosphere was good for enjoying the music and our food. Most of us ate spare ribs weighing 600 g, with a delicious barbecue sauce. Die Bosvelder Pub & Restaurant, a place with very striking decor: the walls are covered with many of South Africa's famous animals. The food was maybe the best we ate in South Africa. Our orders were Springbokvlakte lambs' neck stew, beef in gravy, and steaks topped with cheese and then more meat on top! All meals were accompanied by a selection of cauliflower in white sauce, spinach and carrots. Pepper Chair Restaurant, where the specialty is 1.4 kg T-bone steaks, though most of us were happy to attempt the 1 kg version. Cooked with barbecue sauce over the meat, it was very good! The only problem was their size, which caused the meat to get cold if you did not eat it fast enough! We were all waiting for our 1 kg T-bone steaks, including our Senior Director EMEA Systems Support Germany & Switzerland, Werner Hoellrigl. The Godfather Restaurant, where the food was once again meat in abundance: great ribs, hamburgers and steaks, all accompanied by small plates of carrot and sauteed spinach. Very good. We had two great weeks in South Africa! If you want to join Oracle, then check http://campus.oracle.com

    Read the article

  • Acceptance tests done first...how can this be accomplished?

    - by Crazy Eddie
    The basic gist of most Agile methods is that a feature is not "done" until it's been developed, tested, and in many cases released. This is supposed to happen in quick turnaround chunks of time such as "Sprints" in the Scrum process. A common part of Agile is also TDD, which states that tests are done first. My team works on a GUI program that does a lot of specific drawing and such. In order to provide tests, the testing team needs to be able to work with something that at least attempts to perform the things they are trying to test. We've found no way around this problem. I can very much see where they are coming from, because if I were trying to write software that targeted some basically mysterious interface I'd have a very hard time. Although we have behavior fairly well specified, the exact process of interacting with various UI elements, when it comes to automation, seems to be too unique to each feature to allow testers to write automated scripts to drive something that does not exist. Even if we could, a lot of things end up turning out later to have been missing from the specification. One thing we considered doing was having the testers write test "scripts" that are more like a set of steps that must be performed, as described from a use-case perspective, so that they can be "automated" by a human being. These can then be performed by the developer(s) writing the feature and/or verified by someone else. When the testers later get an opportunity, they automate the "script", mainly for regression purposes. This didn't end up catching on in the team, though. The testing part of the team is actually falling behind us by quite a margin, which is one reason why the apparently extra time of developing a "script" for a human being to perform just did not happen... they're under a crunch to keep up with us developers. If we waited for them, we'd get nothing done. It's not their fault really; they're a bottleneck, but they're doing what they should be and working as fast as possible. The process itself seems to be set up against them. Very often we end up having to go back a month or more in what we've done to fix bugs that the testers have finally gotten around to checking. It's an ugly truth that I'd like to do something about. So what do other teams do to solve this fail cascade? How can we get testers ahead of us, and how can we make it so that there's actually time for them to write tests for the features we do in a sprint, without making us sit and twiddle our thumbs in the meantime? As it currently stands, getting a feature "done", using Agile definitions, would mean having developers work for one week, then testers work the second week, with developers hopefully able to fix all the bugs they come up with in the last couple of days. That's just not going to happen, even if I agreed it was a reasonable solution. I need better ideas...
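    For illustration only: one way to encode such a human-executable "script" so it can later be automated is to write the steps as comments around a skeleton test. A minimal sketch assuming NUnit; the App driver facade and its methods are hypothetical stand-ins for whatever UI automation layer (or human tester) eventually executes the steps:

        using NUnit.Framework;

        [TestFixture]
        public class DrawRectangleAcceptance
        {
            [Test]
            public void UserCanDrawAndSelectARectangle()
            {
                // Step 1: start the application (App is a hypothetical driver facade)
                var app = App.Launch();
                // Step 2: pick the rectangle tool from the toolbar
                app.SelectTool("Rectangle");
                // Step 3: drag on the canvas from (10,10) to (60,40)
                app.DragOnCanvas(10, 10, 60, 40);
                // Step 4: click inside the drawn shape
                app.ClickOnCanvas(35, 25);
                // Expected result: the rectangle is selected
                Assert.That(app.SelectedShape, Is.Not.Null,
                    "Clicking inside a drawn rectangle should select it");
            }
        }

    While the feature is still being built, a human can read the comments and perform the steps by hand; once the UI exists, the testers fill in the driver and the same script becomes the regression test.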

    Read the article

  • Notes from a short presentation on NodeJs

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/05/30/notes-from-a-short-presentation-on-nodejs.aspx
    I volunteered to give a short 30 minute presentation at a work lunch and learn on NodeJs. With my limited experience I see Node as a great tool for build process improvement, scaffolding with yeoman, and running tests with Karma. I haven't looked into using it as a full server or development stack. I guess I'm too stuck on IIS and Visual Studio :-). Here are my notes; they aren't very well formatted, but I wanted to share them anyway.
    What is it? "Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices."
    Why should you be interested?
    - another popular tool that can help you get the job done
    - you can use the command prompt!
    - can be run at build or release time to automate tasks
    What are some uses?
    - https://www.npmjs.org/ - NuGet for Node packages
    - http://bower.io/ - NuGet for UI JavaScript libraries (jQuery, Bootstrap, Angular, etc)
    - http://yeoman.io/ - "Our workflow is comprised of three tools for improving your productivity and satisfaction when building a web app: yo (the scaffolding tool), grunt (the build tool) and bower (for package management)." -> yeoman asks which components you want
    - alternative: http://joakimbeng.eu01.aws.af.cm/slush-replacing-yeoman-with-gulp/
    - https://www.npmjs.org/package/generator-cg-angular - phantom js, less
    git is needed for bower (http://git-scm.com/): run the installer in Windows before you can use bower, select "Run Git from the Windows Command Prompt" in the installer, and reboot afterwards. See http://stackoverflow.com/questions/20069297/bower-git-not-in-the-path-error

        npm install -g git
        npm install -g yo
        npm install -g generator-cg-angular
        mkdir myapp
        cd myapp
        yo cg-angular
        npm install -g bower
        npm install -g grunt-cli
        yo bower
        grunt serve
        grunt test
        grunt build

    There are many generators (generator-angular is another one); I like the NuGet HotTowel-Angular from John Papa myself. IIS Node was needed for Express -> prompt from WebMatrix. Other uses:
    - Karma: a bat file to start up Karma - see below
    - image compression: https://www.npmjs.org/search?q=optimize+images, https://github.com/heldr/node-smushit - do it from the command line
    - LESS compiling
    - js and css combining and minification at build with Gulp for requireJS apps
    - quick lightweight HTTP server: "Express"
    - build pipeline with Grunt or Gulp: http://www.johnpapa.net/gulp-and-grunt-at-anglebrackets/ - Gulp is the newer and improved take on Grunt, supposed to be easier to use, but Grunt is more established; https://github.com/johnpapa/ng-demos/tree/master/grunt-gulp
    - https://github.com/assetgraph/assetgraph-builder - does a lot of the minimizing, combining, image optimization etc. using Node. Looks interesting...
    Links: http://nodejs.org, http://nodeschool.io/, http://sub.watchmecode.net/getting-started-with-nodejs-installing-and-writing-your-first-code/, https://stormpath.com/blog/build-a-killer-node-dot-js-client-for-your-rest-plus-json-api/, https://codio.com/, http://www.hanselman.com/blog/ItsJustASoftwareIssueEdgejsBringsNodeAndNETTogetherOnThreePlatforms.aspx
    Run unit tests - Karma in msBuild. karma-start.bat:

        @echo off
        cd %~dp0\..
        REM 604800 is to make sure we only update once every 7 days
        call npm install --cache-min 604800 -g grunt-cli
        call npm install --cache-min 604800
        call npm install --cache-min 604800 -g karma-cli
        karma start UnitTests\karma.conf.js
        REM karma start UnitTests\karma.conf.js --single-run
        REM see karma-start.bat and karma.conf.js
        REM jsHint comes from NuGet

    Read the article

  • More Free Apps Bound for the Marketplace

    - by Scott Kuhl
    Microsoft has announced they are raising the limit on free applications a developer can submit from 5 to 100. But what does that really mean? First, let's look at the reason for the limitation. The iTunes Store and the Android Market both have a lot more applications available than the Windows Phone Marketplace, but that says nothing about the quality of those applications. I attended a couple of pre-launch events, and Microsoft representatives were clearly told to send a message: we don't want a bunch of junky applications that do nothing but spam the marketplace. That was the reason for the 5 free application limit. Okay, so what has the result been? Well, there are still fart apps, but there is no sign of a developer flooding the market with 1500 wallpaper applications or 1000 copies of the same application all pointed at different RSS feeds. On the other hand, there are developers who want to release real free apps but are constrained by the 5 app limit. So why did Microsoft change its mind? Is it to get the count of applications up, or is it to make developers happy? Windows Phone Marketplace is growing fast, but it's a long way behind the other guys. I don't think Microsoft wants 100,000 apps to show up in the next 3 months if they are loaded with copycat apps. Those numbers will get picked apart quickly, and the press will start complaining about the same problems the Android Market has. I do think the bump was at developer request. Microsoft is usually good about listening to developer feedback, but has been pretty slow about it at times. And from a financial perspective, there will be more apps that Microsoft has to review that they will see no profit on. At least not until they bake in an advertising model connected to Bing. Ultimately, what does this mean for the future? Well, there are developers out there looking to release more than 5 simple free apps, so I think we will see more hobby apps. And there are developers out there trying to make money from advertising instead of sales, so I think we will see more of those also. But the category that I think will grow the fastest is free versions of paid applications that are the same as the trial version of the application. While technically that makes no sense, it's purely a marketing move. Free apps get downloaded a lot more than paid apps, even ones with a trial mode. It always surprises me how little consumers are willing to spend on mobile apps. How many reviews of applications have you seen that say something like "a bit pricey at $1.99"? Really? Have you looked at how much you spend on your phone and plan? I always thought the trial mode baked into the Windows Marketplace was a good idea, so I'm not sure how the more open free market will play out. In the long run, though, I won't be surprised to see a Bing mobile ad model show up so Microsoft can capitalize on the more open and free Windows Marketplace. Bonus: The Oatmeal on How I Feel About Buying Apps
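    For reference, trial mode on Windows Phone 7 is exposed through the LicenseInformation class in the Microsoft.Phone.Marketplace namespace. A minimal sketch of the usual check (the wrapper class below is illustrative, not from any particular sample; caching the result per launch is the common advice since the call queries the marketplace license):

        using Microsoft.Phone.Marketplace;

        public static class TrialCheck
        {
            private static readonly LicenseInformation License = new LicenseInformation();

            // Check once per app launch and cache the result.
            public static bool IsTrial { get; private set; }

            public static void Refresh()
            {
                IsTrial = License.IsTrial();
                // The app can then hide paid features, show a "buy now" prompt, etc.
            }
        }

    A free app that mirrors the trial version is, in effect, doing this check's job with a second marketplace listing instead of code.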

    Read the article

  • SQLAuthority News – Advantages of Distance Learning

    - by Pinal Dave
    Distance education is extremely popular – almost overnight, it seems. Almost everyone has taken an online course, or knows someone who has, or is considering joining an online school. There are many advantages and disadvantages to attending an online school – but the same can be said of attending a physical school! Let's take a look at the top reasons to use distance education. 1) Flexibility. Physical universities are usually willing to make some concessions to students – like night classes, study hours, and online networks. However, nothing is going to beat the flexibility of distance education. You can attend classes and take notes anytime, anywhere, wearing anything you'd like! 2) Affordability. We don't need to get into hard numbers to understand how expensive a university can be. Students are taking on more and more debt just to get an education. Many of these fees pay for room, board, and facilities. Distance education cuts out all these costs and makes attending school much more affordable for the average student. 3) Try before you buy. Did you know that the average college student changes his or her major 10 times before graduating? You can imagine that this kind of indecision plays a huge part in WHEN you graduate – not being able to make up your mind can cost you big bucks if you have to stay in school for extra years! Distance education allows you to take different classes from a wide range of disciplines. Do you want to study forensic science or English literature? Now you don't have to pay for classes you can't afford just to find out. 4) Pace yourself. Some students struggle in a traditional classroom setting – classes can be taught too fast, too slow, or there are too many distractions. Distance education allows mature students to set the pace themselves. They can rewatch lectures they didn't catch the first time, or go through classes quickly if they are already familiar with the material – cutting out the chance of burning out or getting bored. 5) Lifelong learning. Maybe you already have a degree, but would like to learn more about your field, or a related field, or maybe even something completely unrelated – just because you are curious! Distance education allows you to learn whatever you want, whenever you want (and yes, wearing anything you'd like!). 6) Attend whatever college you want. Because of the popularity of distance education, physical campuses are getting in on the game by offering online courses – often just uploaded versions of classes already taught at their campus. Ever wanted to attend Harvard, but knew you couldn't get in? Take a class online! Of course, you probably should not attempt to lie and say you have a Harvard degree, but Ivy League colleges are prestigious because they are the best in their field – take advantage of the best by taking an online course! I am a big believer in continuing education, whether it is online courses, returning to school, or even taking informal classes online. Distance education can be a great way to accomplish these goals and become a lifelong learner. My friends at provides training through virtual classrooms for students who want to avoid travelling. Distance learning courses allow IT aspirants to connect with trainers using the internet. I encourage everyone to check it out! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology

    Read the article

  • Hardware wireless switch has no effect after suspend and 13.10 upgrade

    - by blaineh
    This seems to be a fairly chronic problem, as shown by the following questions:
    - How do I fix a "Wireless is disabled by hardware switch" error?
    - Wireless disabled by hardware switch
    - "Wireless disabled by hardware switch" after suspend and other hardware buttons ineffective - how can I solve this?
    but no good solutions have been found! Wireless works fine after a reboot, but after a suspend the hardware switch (for my laptop this is f12) has no effect on the wireless; it is just permanently off, and a red LED shows that it is. My rfkill list all reads:

        0: phy0: Wireless LAN
                Soft blocked: no
                Hard blocked: yes
        1: hp-wifi: Wireless LAN
                Soft blocked: no
                Hard blocked: yes

    Any combination of rfkill <un>block wifi doesn't work, although one time first blocking then unblocking actually turned it on again. sudo lshw -C network reads:

        *-network DISABLED
             description: Wireless interface
             product: AR9285 Wireless Network Adapter (PCI-Express)
             vendor: Qualcomm Atheros
             physical id: 0
             bus info: pci@0000:02:00.0
             logical name: wlan0
             version: 01
             serial: 78:e4:00:65:2e:3f
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=ath9k driverversion=3.11.0-12-generic firmware=N/A ip=155.99.215.79 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn
             resources: irq:17 memory:90100000-9010ffff
        *-network DISABLED
             description: Ethernet interface
             product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
             vendor: Realtek Semiconductor Co., Ltd.
             physical id: 0
             bus info: pci@0000:03:00.0
             logical name: eth0
             version: 02
             serial: c8:0a:a9:89:b4:30
             size: 10Mbit/s
             capacity: 100Mbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half latency=0 link=no multicast=yes port=MII speed=10Mbit/s
             resources: irq:42 ioport:2000(size=256) memory:90010000-90010fff memory:90000000-9000ffff memory:90020000-9002ffff

    Also, adding a /etc/pm/sleep.d/brcm.sh file as recommended here simply prevents the laptop from suspending at all, which of course is no good. This question has an answer urging me to install the original driver, but it wasn't an "accepted answer" so I'd rather not take a chance on it. Also, I'll admit I'm a bit lost on that and would like help doing so with the specific information I've given. xev shows that no internal event is triggered for my wireless switch (f12), but other function keys acting as hardware switches work fine. I would be happy to provide more information, so long as you're willing to help me find it for you! This is a very annoying bug. I have a Compaq Presario CQ62.
    Edit: I just tried reloading the BIOS defaults (or something) as shown by this video. Didn't work.
    Edit: I tried the contents of this answer, and it didn't work.
    Edit: I made a pastebin of dmesg. I couldn't even begin to understand the contents.
    Edit: Output of lspci | grep Network:

        02:00.0 Network controller: Qualcomm Atheros AR9285 Wireless Network Adapter (PCI-Express) (rev 01)

    Read the article

  • Where Have All the Ugly Forms Gone? Users and ADF Took Care Of It

    - by ultan o'broin
    Sometimes I hear that our application demos are a bit too "cutesy" and that we never talk about user roles that have lots of data entry as a requirement. Some (no names) consider those old clunker forms, with the myriad rows of fields, to be super-productive for data clerks. We do have such roles covered in Oracle Fusion Applications, for sure. But consider what is really the issue here: productivity. Check out how the Oracle Fusion Financials Applications User Experience team went about designing for productivity when receiving and entering invoice data, for example. See how Fusion Financials caters so well for input and control of data? Central to all this is knowing the users and how they work: what tasks do they need to perform, and when. Read more about Fusion Financials productivity in the white paper, Get It Done Fast, Get It Done Right: The Oracle Fusion Financials User Experience. Now and then, I see forms that weren't designed for end user activity at all. Instead, they were designed by developers or by the IT department around the database schema. Forms with literally dozens of fields on the same page, sometimes. Forms that give the impression there was only one task involved, when there may have been several. At times, completing one of these huge forms accurately became so tedious that, under pressure, it made more sense for the user to complete it as quickly as possible and then let somebody else check it for accuracy and fill in the gaps from data emailed along in spreadsheet form. Data accuracy is critical in our business. Not good. Not efficient. Not productive. So here are a few basics on forms design for data entry-type user roles. A great place for developers to start exploring what is possible with forms layout is the Rich Client User Experience (RCUX) guidance on Form Layout, using ADF components.
    User-Centered Forms Design Considerations
    The starting point - something you must always keep in mind with your own design - is design for the end user. Find a representative end user, and keep that user engaged throughout the design, deployment, and test process. Consider these points when user-testing those forms:
    - Are there automated or technical solutions to entering the data that avoid manual input in the first place? For example: imports, uploads, OCR, whatever. Some day we will be able to tell Siri to do it, but leave that for now.
    - Design your form to reflect the task involved (i.e., the business process) and not the database schema.
    - On the form, group like fields together, logically.
    - Eliminate duplicate data entry, or prepopulate from previous data entry.
    - Allow users to complete fields in the order they wish (i.e., no interdependency).
    - Allow for tabbing between fields (the keyboard is faster than the mouse), so know how the browser supports this (see that RCUX guideline).
    - Allow for final validation at the page level, not at field-level entry. Way better for heads-down users. For example, ADF messages allow you to see a list of all validation errors on a page on a final submit or navigation action, and to easily navigate to the point of error.
    - Better still, be error tolerant. Allow users to enter data in formats they are comfortable with. Bind any relevant user preference setting to the input format allowed (for example, the locale date format). Explore what data entry conversion can do for you automatically too (see the ADF converter demos; convenience patterns can also be written).
    - Only ask for data input when it's needed. Get rid of, or hide, optional fields.
    - Cut down on the number of mandatory fields, and mark them clearly (use a *).
    - Clearly label the fields in plain language.
    I am sure you have a few more tips on forms design for data entry users - share them in the comments. Remember the user.

    Read the article

  • Thread.Interrupt Is Evil

    - by Alois Kraus
    Recently I found an interesting issue with Thread.Interrupt during application shutdown. An application was crashing about once a week and we didn't really have a clue what the issue was. Since it did not happen very often, it was left as is until we got some memory dumps during the crash. A memory dump usually means WinDbg, which I really like to use (I know I am one of the very few fans of it). After a quick analysis I found that the main thread had already exited and the crashing thread was stuck in a Monitor.Wait. Strange indeed. Running the application a few thousand times under the debugger would potentially not have shown me the reason, so I decided to do what I call constructive debugging. I created a simple Console application project and tried to simulate the exact circumstances of the crash from the information I had via the memory dump and source code reading. The thread that was crashing was actually running MS code from an old version of the Microsoft Caching Application Block. From reading the code I could conclude that the main thread called the Dispose method on the CacheManager class, which called Thread.Interrupt on the cache scavenger thread, which was just waiting for work to do. My first version of the repro looked like this:

        static void Main(string[] args)
        {
            Thread t = new Thread(ThreadFunc) { IsBackground = true, Name = "Test Thread" };
            t.Start();
            Console.WriteLine("Interrupt Thread");
            t.Interrupt();
        }

        static void ThreadFunc()
        {
            while (true)
            {
                // block until unblocked or awakened via ThreadInterruptedException
                object value = Dequeue();
            }
        }

        static object WaitObject = new object();

        static object Dequeue()
        {
            object lret = "got value";
            try
            {
                lock (WaitObject)
                {
                }
            }
            catch (ThreadInterruptedException)
            {
                Console.WriteLine("Got ThreadInterruptedException");
                lret = null;
            }
            return lret;
        }

    I start a background thread, call Thread.Interrupt on it, and then directly let the application terminate. The thread in the meantime does plenty of Monitor.Enter/Leave calls to simulate work. This first version did not crash, so I needed to dig deeper. From the memory dump I knew that the finalizer thread was running just some critical finalizers, which were closing file handles. OK, let's add some long-running critical finalizers to the sample:

        class FinalizableObject : CriticalFinalizerObject
        {
            ~FinalizableObject()
            {
                Console.WriteLine("Hi, we are waiting to finalize now and block the finalizer thread for 5s.");
                Thread.Sleep(5000);
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                FinalizableObject fin = new FinalizableObject();
                Thread t = new Thread(ThreadFunc) { IsBackground = true, Name = "Test Thread" };
                t.Start();
                Console.WriteLine("Interrupt Thread");
                t.Interrupt();
                GC.KeepAlive(fin); // prevent finalizing it too early
                // After leaving Main the other thread is woken up via Thread.Abort
                // while we are finalizing. This causes a stack overflow in the CLR
                // ThreadAbortException handling at this time.
            }
        }

    With this changed Main method and a blocking critical finalizer, I got my crash just like the real application. The funny thing is that this is actually a CLR bug. When the Main method is left, the CLR suspends all threads except the finalizer thread and declares all objects as garbage. After the normal finalizers are called, the critical finalizers are executed, e.g. to free OS handles (usually). Remember that I called Thread.Interrupt as one of the last methods in the Main method.
    The Interrupt method is actually asynchronous: it wakes a thread up and throws a ThreadInterruptedException only once, unlike Thread.Abort, which rethrows the exception when an exception handling clause is left. It seems that the CLR does not expect a frozen thread to wake up again while the critical finalizers are executed. While trying to raise a ThreadInterruptedException, the CLR goes down with a stack overflow. Oops, not so nice. Why nobody has noticed this for years is my next question. As it turned out, this error only happens on the CLR for .NET 4.0 (x86 and x64). It does not show up in earlier or later versions of the CLR. I have reported this issue on Connect here, but so far it has not been confirmed as a CLR bug. But I would be surprised if my console application were to blame for a stack overflow in my test thread in a Monitor.Wait call. What is the moral of this story? Thread.Abort is evil, but Thread.Interrupt is too. It is so evil that even the CLR of .NET 4.0 contains a race condition during CLR shutdown. When the CLR gurus can get it wrong, the chances are high that you will get it wrong too when you use these constructs. If you do not believe me, see what Patrick Smacchia blogs about Thread.Abort and List.Sort. Not only the CLR creators can get it wrong; the BCL writers sometimes have a hard time with correct exception handling as well. If you tell me that you use Thread.Abort frequently and have never had problems with it, I suspect you have not looked deep enough into your application to find such sporadic errors.
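    As an aside, the scavenger pattern can be written without Thread.Interrupt at all, using a cooperative shutdown signal. This is just a minimal sketch of that alternative (the class and member names are mine, not from the Caching Application Block):

        using System;
        using System.Threading;

        class Scavenger : IDisposable
        {
            private readonly ManualResetEvent stopRequested = new ManualResetEvent(false);
            private readonly Thread worker;

            public Scavenger()
            {
                worker = new Thread(Run) { IsBackground = true, Name = "Scavenger" };
                worker.Start();
            }

            private void Run()
            {
                // Wake up once per second to do work, or immediately when Dispose signals us.
                while (!stopRequested.WaitOne(1000))
                {
                    // ... scavenge the cache here ...
                }
            }

            public void Dispose()
            {
                stopRequested.Set();   // ask the thread to exit instead of interrupting it
                worker.Join();         // and wait until it really has exited
                stopRequested.Close();
            }
        }

    Because the worker only ever leaves its wait voluntarily, there is no window where the runtime has to deliver an asynchronous exception into a frozen thread during shutdown.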

    Read the article

  • Talking JavaOne with Rock Star Kirk Pepperdine

    - by Janice J. Heiss
    Kirk Pepperdine is not only a JavaOne Rock Star but a Java Champion and a highly regarded expert in Java performance tuning who works as a consultant, educator, and author. He is the principal consultant at Kodewerk Ltd. He speaks frequently at conferences and co-authored the Ant Developer's Handbook. In the rapidly shifting world of information technology, Pepperdine, as much as anyone, keeps up with what's happening in Java performance tuning. Pepperdine will participate in the following sessions: CON5405 - Are Your Garbage Collection Logs Speaking to You?; BOF6540 - Java Champions and JUG Leaders Meet Oracle Executives (with Jeff Genender, Mattias Karlsson, Henrik Stahl, Georges Saab); HOL6500 - Finding and Solving Java Deadlocks (with Heinz Kabutz, Ellen Kraffmiller, Martijn Verburg, Jeff Genender, and Henri Tremblay). I asked him what technological changes need to be taken into account in performance tuning. “The volume of data we're dealing with just seems to be getting bigger and bigger all the time,” observed Pepperdine. “A couple of years ago you'd never think of needing a heap that was 64g, but today there are deployments where the heap has grown to 256g and tomorrow there are plans for heaps that are even larger. Dealing with all that data simply requires more horse power and some very specialized techniques. In some cases, teams are trying to push hardware to the breaking point. Under those conditions, you need to be very clever just to get things to work -- let alone to get them to be fast. We are very quickly moving from a world where everything happens in a transaction to one where if you were to even consider using a transaction, you've lost.” When asked about the greatest misconceptions about performance tuning that he currently encounters, he said, “If you have a performance problem, you should start looking at code at the very least and for that extra step, whip out an execution profiler. I'm not going to say that I never use execution profilers or look at code. What I will say is that execution profilers are effective for a small subset of performance problems and code is literally the last thing you should look at.” And what is the most exciting thing happening in the world of Java today? “Interesting question because so many people would say that nothing exciting is happening in Java. Some might be disappointed that a few features have slipped in terms of scheduling. But I'd disagree with the first group and I'm not so concerned about the slippage because I still see a lot of exciting things happening. First, lambda will finally be with us and with lambda will come better ways.” For JavaOne, he is proctoring for Heinz Kabutz's lab. “I'm actually looking forward to that more than I am to my own talk,” he remarked. “Heinz will be the third non-Sun/Oracle employee to present a lab and the first since Oracle began hosting JavaOne. He's got a great message. He's spent a ton of time making sure things are going to work, and we've got a great team of proctors to help out. After that, getting my talk done, the Java Champions' panel session and then kicking back and just meeting up and talking to some Java heads.” Finally, what should Java developers know that they currently do not know? “‘Write Once, Run Everywhere’ is a great slogan and Java has come closer to that dream than any other technology stack that I've used. That said, different hardware bits work differently and as hard as we try, the JVM can't hide all the differences.
Plus, if we are to get good performance we need to work with our hardware and not against it. All this implies that Java developers need to know more about the hardware they are deploying to.”

    Read the article

  • PASS summit 2013. We do not remember days. We remember moments.

    - by Maria Zakourdaev
      "Business or pleasure?" barked the security officer in the Charlotte International Airport. "I’m not sure, sir," I whimpered, immediately losing all courage. "I'm here for the database technologies summit called PASS”. "Sounds boring. Definitely a business trip." Boring?! He couldn’t have been more wrong. If he only knew about the countless meetings throughout the year where I waved my hands at my great boss and explained again and again how fantastic this summit is and how much I learned last year. One by one, the drops of water began eating away at the stone. He finally approved of my trip just to stop me from torturing him. Time moves as slow as a turtle when you are waiting for something. Time runs as fast as a cheetah when you are there. PASS has come...and passed. It’s been an amazing week. Enormous sqlenergy has filled the city, filled the convention center and the surrounding pubs and restaurants. There were awesome speakers, great content, and the chance to meet most inspiring database professionals from all over the world. Some sessions were unforgettable. Imagine a fully packed room with more than 500 people in awed silence, catching each and every one of Paul Randall's words. His tremendous energy and deep knowledge were truly thrilling. No words can describe Rob Farley's unique presentation style, captivating and engaging the audience. When the precious session minutes were over, I could tell that the many random puzzle pieces of information that his listeners knew had been suddenly combined into a clear, cohesive picture. I was amazed as always by Paul White's great sense of humor and his phenomenal ability to explain complicated concepts in a simple way. The keynote by the brilliant Dr. DeWitt from Microsoft in front of the full summit audience of 5000 deeply listening people was genuinely breathtaking. The entire conference throughout offered excellent speakers who inspired me to absorb the knowledge and use it when I got home. To my great surprise, I found that there are other people in this world who like replication as much I do. During the Birds of a Feather Luncheon, SQL Server MVP Ted Krueger was writing a script for replicating the food to other tables. I learned many things at PASS, and not all of them were about SQL. After three summits, this time I finally got the knack of networking. I actually went up and spoke to people, and believe me, that was not easy for an introvert. But this is what the summit is all about. Sqlpeople. They are the ones who make it such an exciting experience. I will be looking forward to the next year. Till then I have my notes and new ideas. How long was the summit? Thousands of unforgettable moments.

    Read the article

  • Planning in the Cloud - For Real

    - by jmorourke
    One of the hottest topics at Oracle OpenWorld 2012 this week is “the cloud”. Over the past few years, Oracle has made major investments in cloud-based applications, including some acquisitions, and now has over 100 applications available through Oracle Cloud services. At OpenWorld this week, Oracle announced seven new offerings delivered via the Oracle Cloud services platform, one of which is the Oracle Planning and Budgeting Cloud Service. Based on Oracle Hyperion Planning, this service is the first of Oracle's EPM applications to be offered in the Cloud. This solution is targeted at organizations that are struggling with spreadsheets or legacy planning and budgeting applications, want to deploy a world-class solution for financial planning and budgeting, but are constrained by IT resources and capital budgets. With the Oracle Planning and Budgeting Cloud Service, organizations can fast-track their way to world-class financial planning, budgeting and forecasting – at cloud speed, with no IT infrastructure investments and with minimal IT resources. Oracle Hyperion Planning is a market-leading budgeting, planning and forecasting application that is used by over 3,300 organizations worldwide. Prior to this announcement, Oracle Hyperion Planning was only offered on a license and maintenance basis. It could be deployed on-premise, or hosted through Oracle On-Demand or third-party hosting partners. With this announcement, Oracle's market-leading Hyperion Planning application will be available as a Cloud Service with subscription-based pricing. This lowers the cost of entry and deployment for new customers and provides a scalable environment to support future growth. With this announcement, Oracle is the first major vendor to offer one of its core EPM applications as a cloud-based service. Other major vendors have recently announced cloud-based EPM solutions, but these are only BI dashboards delivered via a cloud platform. With this announcement, Oracle is providing market-leading, world-class financial budgeting, planning and forecasting as a cloud service, with the following advantages:
    - Subscription-based pricing
    - Available standalone or as an extension to Oracle Fusion Financials Cloud Service
    - Implementation services available from Oracle and the Oracle Partner Network
    - High scalability and performance
    - Integrated financial reporting and MS Office interface
    - Seamless integration with Oracle and non-Oracle transactional applications
    - More options for customers' planning and budgeting deployment vs. strictly on-premise or cloud-only solution providers
    The OpenWorld announcement of Oracle Planning and Budgeting Cloud Service is a preview announcement, with controlled availability expected in calendar year 2012. For more information, check out the links below:
    - Press Release
    - Web site
    If you have any questions or need additional information, please feel free to contact me at [email protected].

    Read the article

  • Oracle Spatial and Graph – A year in review

    - by Mandy Ho
    What a great year for Oracle Spatial! Or shall I now say Oracle Spatial and Graph, with our official name change this summer. There were so many exciting events and updates this year, and this blog will review and link to some of the events you may have missed. We kicked off 2012 with our webinar: Situational Analysis at OnStar with Oracle Spatial and Graph. We collaborated with OnStar's Emergency Strategy and Outreach expert, Jeff Joyner, on how OnStar uses Google Earth Visualization, NAVTEQ data and Oracle Database to deliver fast, accurate emergency services to its customers. In the next webinar in our 2012 series, Oracle partner TARGUSinfo showcased how to build a robust, scalable and secure customer relationship management system – with built-in mapping and spatial analysis, deployed in the cloud. This is a very cool system using all Oracle technologies, including Oracle Database and Fusion Middleware MapViewer. Attendees learned how to gather market insight, score prospects and customers, and perform location analysis. The replay is available here. Our final webinar of the year focused on using Oracle Business Intelligence tools, along with Oracle Spatial and Graph, to perform location-aware predictive analysis. Watch the webcast here. In June, we joined up with the Location Intelligence conference in Washington, DC, and had a very successful 2012 Oracle Spatial User Conference. Customers and partners from the US, as well as from EMEA and Asia, flew in to share experiences and ideas, and get technical updates from Oracle experts. Users were excited to hear about spatial-Exadata performance, and advances in MapViewer and BI. Peter Doolan of Oracle Public Sector kicked off the event with a great keynote, and US Census, NOAA, and Ordnance Survey Great Britain were just a few of the presenters. The presentation archive is here. We recognized some of the most exceptional partners and customers for their contributions to advancing mainstream solutions using geospatial technologies. Planning for 2013's conference has already started; please contribute your papers for consideration here: http://www.locationintelligence.net/. We also launched a new Oracle PartnerNetwork Spatial Specialization – to enable partners to get validated in the marketplace for their expertise in taking solutions to market. Individuals can also get certified. Learn more here. Oracle OpenWorld was not to disappoint, with news regarding our next Oracle Spatial and Graph release, as well as the announcement of our new Oracle Spatial and Graph SIG board! Join the SIG today. One more exciting event as we look to 2013: spatial and location technologies have a dedicated track at the January BIWA SIG Summit, on January 9-10 in Redwood Shores, CA.
    View the agenda and register here: www.biwasummit.org. We thank you for all your support during 2012 and look forward to an even more exciting 2013! Wishing you and your family a prosperous New Year and Happy Holidays!

    Read the article

  • Fun with Python

    - by dotneteer
    I am taking a class on Coursera recently. My formal education is in physics. Although I have been working as a developer for over 18 years and have learnt a lot of programming on the job, I still would like to gain some systematic knowledge in computer science. Coursera courses taught by Stanford professors provided me with a wonderful chance. The three languages recommended for assignments are Java, C and Python. I am fluent in Java and have done some projects using C++/MFC/ATL in the past, but I would like to try something different this time. I first started with pure C. Soon I discovered that I had to write a lot of code outside the problem I was trying to solve, because of the very limited C standard library. For example, to read a list of values from a file, I have to read character by character until I hit a delimiter. If I need a list that can grow, I have to create the data structure myself, something that I have taken for granted in .Net or Java. Out of frustration, I switched to Python. I was pleasantly surprised to find that Python is very easy to learn. The tutorial on the official Python site has exactly the right pace for me, someone with experience in another programming language. After a couple of hours on the tutorial and a few more minutes of toying with IDLE, I was in business. I like the "batteries included" philosophy that gives me everything I need out of the box. For someone from a C# or Java background, curly braces are replaced by colons (:) and indentation. Although I tend to miss a colon from time to time, I found that the idea of significant indentation is actually very nice once I got used to it. I also like the multiple assignment feature and multiple return values. When I need to return a by-product, I just add it to the list of returns. When would I use Python? I would use Python if I need to compute anything quickly. The language is very easy to use. Python has a good collection of libraries (packages). The REPL of the interpreter allows me to test ideas quickly before committing them to a script. Lots of computer science work has been ported from Lisp to Python. Some universities are even teaching SICP in Python. When wouldn't I use Python? I mostly would not use it in a managed environment, such as IronPython or Jython. Both .Net and Java already have a rich library, so one has to make a choice of which library to use. If we use the managed runtime library, the code will be tied to the particular runtime and thus not portable. If we use the Python library, then we will face a relatively long start-up time. For this reason, I would not recommend using IronPython for WP7 development. The only situation where I see merit in managed Python is in a server application where I can preload Python so that the start-up time is not a concern. Using Python as a managed glue language is overkill most of the time; a managed Scheme could be a better glue language, as it is small enough to start up very fast.

    Read the article

  • Guaranteed Restore Points as Fallback Method

    - by Mike Dietrich
    Thanks to the great audience yesterday in the Upgrade & Migration Workshop in Utrecht. That was really fun, and I was amazed by our new facilities (and the "wellness" lights surrounding the plenum room's walls). Another reason why I like to do these workshops is that I often learn new things from you. So credits here go to Rick van Ek, who highlighted the following topic to me. Yesterday (and in some previous workshops) I mentioned during the discussion about fallback strategies that you'll have to switch on Flashback Database beforehand to create a guaranteed restore point, in case you encounter an issue during the database upgrade. I knew that since Oracle Database 11.2 we've made it possible to switch Flashback Database on without taking the database into MOUNT status (you could always switch it off while the database was open, in all releases). But before Oracle Database 11.2, switching it on did require MOUNT status.

        SQL> create restore point rp1 guarantee flashback database;
        create restore point rp1 guarantee flashback database
        *
        ERROR at line 1:
        ORA-38784: Cannot create restore point 'RP1'.
        ORA-38787: Creating the first guaranteed restore point requires mount mode when flashback database is off.

    But Rick mentioned that I won't need to switch Flashback Database on at all to create a guaranteed restore point. And he's right - in older releases I would have had to go into MOUNT state to define the restore point, which meant restarting the database, but in 11.2 that's not necessary anymore. And the same applies when you upgrade your pre-11.2 database (e.g. an Oracle Database 10.2.0.4) to Oracle Database 11.2. As soon as you start your "old" not-yet-upgraded database in your 11.2 environment with STARTUP UPGRADE, you can define a guaranteed restore point. If you tail the alert.log you'll see that the database starts the RVWR (Recovery Writer) background process - you'll just have to make sure that you've defined values for db_recovery_file_dest_size and db_recovery_file_dest.

        SQL> startup upgrade
        ORACLE instance started.
        Total System Global Area  417546240 bytes
        Fixed Size                  2228944 bytes
        Variable Size             134221104 bytes
        Database Buffers          272629760 bytes
        Redo Buffers                8466432 bytes
        Database mounted.
        Database opened.
        SQL> create restore point grpt guarantee flashback database;
        Restore point created.
        SQL> drop restore point grpt;

    And don't forget to drop that restore point sooner or later, as it is guaranteed - and will fill up your Fast Recovery Area pretty quickly. Just on the side: in any case, archivelog mode is required if you'd like to work with restore points. - Mike

    Read the article

  • ASP.NET Multi-Select Radio Buttons

    - by Ajarn Mark Caldwell
    “HERESY!” you say, “Radio buttons are for single-select items! If you want multi-select, use checkboxes!” Well, I would agree, and that is why I consider this a significant bug that ASP.NET developers need to be aware of. Here's the situation. If you use ASP:RadioButton controls on your WebForm, then you know that in order to get them to behave properly, that is, to define a group in which only one of them can be selected by the user, you use the GroupName attribute and set the same value on each one. For example:

        <asp:RadioButton runat="server" ID="rdo1" GroupName="GroupName" Checked="true" />
        <asp:RadioButton runat="server" ID="rdo2" GroupName="GroupName" />

    With this configuration, the controls will render to the browser as HTML Input / Type=radio tags, and when the user selects one, the browser will automatically deselect the other one so that only one can be selected (checked) at any time. BUT, if you use server-side code to manipulate the Checked attribute of these controls, it is possible to set them both to believe that they are checked.

        rdo2.Checked = true; // Does NOT change the Checked attribute of rdo1 to be false.

    As long as you remain in server-side code, the system will believe that both radio buttons are checked (you can verify this in the debugger). Therefore, if you later have code that looks like this

        if (rdo1.Checked)
        {
            DoSomething1();
        }
        else
        {
            DoSomethingElse();
        }

    then it will always evaluate the condition to be true and take the first action. The good news is that if you return to the client with multiple radio buttons checked, the browser tries to clean that up for you and make only one of them really checked. It turns out that the last one on the screen wins, so in this case you will in fact end up with rdo2 as checked, and if you then make a trip to the server to run the code above, it will appear to be working properly. However, if your page initializes with rdo2 checked and in code you set rdo1 to checked also, then when you go back to the client, rdo2 will remain checked, again because it is the last one and the last one checked "wins". And this gets even uglier if you ever set these radio buttons to be disabled. In that case, although the client browser renders the radio buttons as though only one of them is checked, the system actually retains the value of both of them as checked, and your next trip to the server will really frustrate you, because the browser showed rdo2 as checked but your DoSomething1() routine keeps getting executed. The following is sample code you can put into any WebForm to test this yourself.

        <body>
            <form id="form1" runat="server">
                <h1>Radio Button Test</h1>
                <hr />
                <asp:Button runat="server" ID="cmdBlankPostback" Text="Blank Postback" />
                <asp:Button runat="server" ID="cmdEnable" Text="Enable All" OnClick="cmdEnable_Click" />
                <asp:Button runat="server" ID="cmdDisable" Text="Disable All" OnClick="cmdDisable_Click" />
                <asp:Button runat="server" ID="cmdTest" Text="Test" OnClick="cmdTest_Click" />
                <br /><br /><br />
                <asp:RadioButton ID="rdoG1R1" GroupName="Group1" runat="server" Text="Group 1 Radio 1" Checked="true" /><br />
                <asp:RadioButton ID="rdoG1R2" GroupName="Group1" runat="server" Text="Group 1 Radio 2" /><br />
                <asp:RadioButton ID="rdoG1R3" GroupName="Group1" runat="server" Text="Group 1 Radio 3" /><br />
                <hr />
                <asp:RadioButton ID="rdoG2R1" GroupName="Group2" runat="server" Text="Group 2 Radio 1" /><br />
                <asp:RadioButton ID="rdoG2R2" GroupName="Group2" runat="server" Text="Group 2 Radio 2" Checked="true" /><br />
            </form>
        </body>

        protected void Page_Load(object sender, EventArgs e)
        {
        }

        protected void cmdEnable_Click(object sender, EventArgs e)
        {
            rdoG1R1.Enabled = true;
            rdoG1R2.Enabled = true;
            rdoG1R3.Enabled = true;
            rdoG2R1.Enabled = true;
            rdoG2R2.Enabled = true;
        }

        protected void cmdDisable_Click(object sender, EventArgs e)
        {
            rdoG1R1.Enabled = false;
            rdoG1R2.Enabled = false;
            rdoG1R3.Enabled = false;
            rdoG2R1.Enabled = false;
            rdoG2R2.Enabled = false;
        }

        protected void cmdTest_Click(object sender, EventArgs e)
        {
            rdoG1R2.Checked = true;
            rdoG2R1.Checked = true;
        }

        protected void Page_PreRender(object sender, EventArgs e)
        {
        }

    After you copy the markup and code-behind into the appropriate files, I recommend you set a breakpoint on Page_Load as well as cmdTest_Click, and add each of the radio button controls to the Watch list so that you can walk through the code and see exactly what is happening. Use the Blank Postback button to cause a postback to the server so you can inspect things without making any changes. The moral of the story is: if you do server-side manipulation of the Checked status of RadioButton controls, then you need to set ALL of the controls in a group whenever you want to change one.
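    One way to follow that advice without sprinkling assignments everywhere is a tiny helper that makes the mutual exclusion explicit on the server side. This is just a sketch; the helper name and the explicit member list are mine, not from the post:

        using System.Web.UI.WebControls;

        public static class RadioGroupHelper
        {
            // Check one radio button and explicitly uncheck every other member of
            // its group, so the server-side state can never disagree with the browser.
            public static void Check(RadioButton selected, params RadioButton[] groupMembers)
            {
                foreach (RadioButton rdo in groupMembers)
                {
                    rdo.Checked = (rdo == selected);
                }
            }
        }

    In the test page above, cmdTest_Click would then call RadioGroupHelper.Check(rdoG1R2, rdoG1R1, rdoG1R2, rdoG1R3) instead of setting rdoG1R2.Checked directly.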

    Read the article

  • Solaris 11.1: Encrypted Immutable Zones on (ZFS) Shared Storage

    - by darrenm
Solaris 11 brought both ZFS encryption and the Immutable Zones feature, and I've talked about the combination in the past. Solaris 11.1 adds a fully supported method of storing zones in their own ZFS pool on shared storage, so let's update things a little and put all three parts together. When using an iSCSI (or other supported shared storage) target for a zone, we can either let the Zones framework set up the ZFS pool or we can do it manually beforehand and tell the Zones framework to use the one we made earlier. To enable encryption we have to take the second path, so that we can set up the pool with encryption before we start to install the zone on it. We start by configuring the zone and specifying a rootzpool resource:

# zonecfg -z eizoss
Use 'create' to begin configuring a new zone.
zonecfg:eizoss> create
create: Using system default template 'SYSdefault'
zonecfg:eizoss> set zonepath=/zones/eizoss
zonecfg:eizoss> set file-mac-profile=fixed-configuration
zonecfg:eizoss> add rootzpool
zonecfg:eizoss:rootzpool> add storage \
  iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001
zonecfg:eizoss:rootzpool> end
zonecfg:eizoss> verify
zonecfg:eizoss> commit
zonecfg:eizoss>

Now let's create the pool and specify encryption:

# suriadm map \
  iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001
PROPERTY    VALUE
mapped-dev  /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0
# echo "zfscrypto" > /zones/p
# zpool create -O encryption=on -O keysource=passphrase,file:///zones/p eizoss \
  /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0
# zpool export eizoss

Note that the keysource above is just for this example; realistically you should use an Oracle Key Manager or some other better key storage, but that isn't the purpose of this example. Note, however, that it does need to be one of file://, https://, or pkcs11:, and not prompt, for the key location. Also note that we exported the newly created pool. The name we used here doesn't actually matter, because it will get set properly on import anyway. So let's go ahead and do our install:

# zoneadm -z eizoss install -x force-zpool-import
Configured zone storage resource(s) from:
  iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001
Imported zone zpool: eizoss_rpool
Progress being logged to /var/log/zones/zoneadm.20121029T115231Z.eizoss.install
  Image: Preparing at /zones/eizoss/root.
  AI Manifest: /tmp/manifest.xml.ujaq54
  SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
  Zonename: eizoss
  Installation: Starting ...
  Creating IPS image
  Startup linked: 1/1 done
  Installing packages from:
    solaris
      origin: http://pkg.us.oracle.com/solaris/release/
  Please review the licenses for the following packages post-install:
    consolidation/osnet/osnet-incorporation (automatically accepted, not displayed)
  Package licenses may be viewed using the command:
    pkg info --license <pkg_fmri>
  DOWNLOAD      PKGS      FILES        XFER (MB)    SPEED
  Completed     187/187   33575/33575  227.0/227.0  384k/s
  PHASE                           ITEMS
  Installing new actions          47449/47449
  Updating package state database Done
  Updating image state            Done
  Creating fast lookup database   Done
  Installation: Succeeded
  Note: Man pages can be obtained by installing pkg:/system/manual
  done.
  Done: Installation completed in 929.606 seconds.
  Next Steps: Boot the zone, then log into the zone console (zlogin -C)
  to complete the configuration process.
Log saved in non-global zone as /zones/eizoss/root/var/log/zones/zoneadm.20121029T115231Z.eizoss.install

That was really all we had to do; when the install is done, boot it up as normal. The zone administrator has no direct access to the ZFS wrapping keys used for the encrypted pool the zone is stored on. Due to how inheritance works in ZFS, he can still create new encrypted datasets that use those wrapping keys (without them ever being inside a process in the zone), or he can create encrypted datasets inside the zone that use keys of his own choosing. The output below shows the two cases: rpool is inheriting the key material from the global zone (note that we can see the value of the keysource property, but we don't use it inside the zone, nor does that path need to be — or is it — accessible inside the zone), whereas rpool/export/home/bob has set keysource locally.

# zfs get encryption,keysource rpool rpool/export/home/bob
NAME                   PROPERTY    VALUE                       SOURCE
rpool                  encryption  on                          inherited from $globalzone
rpool                  keysource   passphrase,file:///zones/p  inherited from $globalzone
rpool/export/home/bob  encryption  on                          local
rpool/export/home/bob  keysource   passphrase,prompt           local
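For completeness, the second case above — a dataset keyed by the zone administrator — can be created from inside the zone. This is a hypothetical sketch, assuming the dataset path from the output above:

# Run inside the zone (e.g. via zlogin eizoss). The passphrase is prompted
# for locally, so the global zone's wrapping key is never involved:
zfs create -o encryption=on -o keysource=passphrase,prompt rpool/export/home/bob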

    Read the article

  • Profiling Startup Of VS2012 &ndash; YourKit Profiler

    - by Alois Kraus
The YourKit (v7.0.5) profiler is interesting in terms of price (79€ for a single-seat license, 409€ with 1 year of support and upgrades) and feature set. You get a performance profiler and a memory profiler in one package, something for which you normally have to pay extra with the other vendors. As an interesting side note, the profiler UI is written in Java, because they also sell Java profilers with the same feature set. To get all methods of a VS startup you first need to configure it to include System* in the profiled methods, and you need to configure * to measure wall clock time. By default it records only CPU times, which lets you optimize CPU-hungry operations, but you will never see a Thread.Sleep(10000) blocking the UI in this mode. Like the others, it can profile processes started from within the profiler, but it can also profile the next or all started processes. As usual, it can profile in sampling and tracing mode. But since it is a memory profiler as well, by default it also records all object allocations > 1 MB. With allocation recording enabled VS2012 did crash, but without allocation recording there were no problems. The CPU tab contains the time line of the application, and when you click in the graph you see the call stacks of all threads at that time. This is really a nice feature. When you select a time region you see the CPU usage estimation for that time window. I have seen many applications consuming 100% CPU only because they created garbage like crazy. For this, the Garbage Collection tab is interesting, in conjunction with a time range. This view is like the CPU tab, only the CPU graph (green) is missing. All relevant information except for GCs/s is already visible in the CPU tab. Very handy to pinpoint excessive GC or CPU-bound issues. The Threads tab shows the thread names and their lifetimes. This is useful to see thread interactions, or which thread is hottest in terms of CPU consumption. On the CPU tab the call tree exists in a merged and a thread-specific view. When you click on a method you get, below it, a list of all called methods. There you can sort by methods with a high own time, which are worth optimizing. In the Method List you can select which scope you want to see. Back Traces are the methods which called you. Callees is the list of methods called directly or indirectly by your method, as a flat list. This is not a call stack, but it is still very useful to see which methods were slow, so you can find the "root" cause quite quickly without the need to click through long call stacks. The last view, Merged Callees, is a call-stacked view of the previous one. This helps a lot in understanding who called each method at run time. You would get the same view with a debugger for one call invocation, but here you get the full statistics (invocation count) as well. Since YourKit is also a memory profiler, you can directly see which objects you have on your managed heap and which objects hold most of your precious memory. In the Object Explorer view you can also examine the contents of your objects (strings or whatever) to get a better understanding of which objects were potentially allocating this stuff. YourKit is a very easy to use combined memory and performance profiler in one product. The unbeatable single-license price makes it very attractive to simply buy outright. Although it has a Java UI, it is very responsive, and its memory consumption is considerably lower compared to dotTrace and the ANTS profiler.
What I really like is to start the YourKit UI and then start the processes I want to profile as usual. There is no need to alter your own application code to be able to inject a profiler into your newly started processes. For performance and memory profiling you can simply select the process you want to investigate from the list of started processes. That's the way I like to use profilers: just get out of the way and let the application run without any special preparations.   Next: Telerik JustTrace

    Read the article

  • Ubuntu 12.04 host – Virtualbox 4.1.12 Guest=Windows 7 – Network will not connect

    - by user287529
Ubuntu 12.04 host – VirtualBox 4.1.12 – guest = Windows 7 – the network will not connect. I'm using Ubuntu 12.04 on an Acer Aspire 5742-7645 laptop with 4 GB memory, an Intel Core i3 processor, Intel HD Graphics, a DVD drive, 802.11 b/g/n, and a 500 GB HD. I connect to my router via a wireless connection. I have installed VirtualBox 4.1.12 from the Ubuntu Software Center and installed Guest Additions 4.1.12 in the Windows 7 guest session. I have Windows XP and Windows 7 installed as guests in VirtualBox. The network settings are different for XP and 7 – see below.

Network settings, XP guest: Adapter 1: PCnet-FAST III (NAT). The network works perfectly and has worked well for several years.

Network settings, Win 7 guest: Adapter 1: Intel PRO/1000 MT Desktop (Bridged adapter, eth1); Promiscuous Mode = Allow All; Cable connected = checked.

When I originally installed Windows 7, I tried NAT and the guest network would not connect. Once I changed the setting to the above (bridged), the network worked perfectly. However, what I believe is that after updates (I'm not sure whether it was an Ubuntu or a Windows update) the guest network stopped working, and I cannot get it to connect.

Interfaces file content:

auto lo
iface lo inet loopback

ifconfig yields (labels translated from Spanish):

eth0  Link encap:Ethernet  HWaddr 1c:75:08:09:f6:5c
      UP BROADCAST MULTICAST  MTU:1500  Metric:1
      RX packets:0 errors:0 dropped:0 overruns:0 frame:0
      TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
      Interrupt:16

eth1  Link encap:Ethernet  HWaddr 4c:0f:6e:7c:9f:01
      inet addr:192.168.1.104  Bcast:192.168.1.255  Mask:255.255.255.0
      inet6 addr: fe80::4e0f:6eff:fe7c:9f01/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:18095 errors:2 dropped:0 overruns:0 frame:24344
      TX packets:9281 errors:47 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:5301926 (5.3 MB)  TX bytes:1441885 (1.4 MB)
      Interrupt:17

lo    Link encap:Local Loopback
      inet addr:127.0.0.1  Mask:255.0.0.0
      inet6 addr: ::1/128 Scope:Host
      UP LOOPBACK RUNNING  MTU:16436  Metric:1
      RX packets:3208 errors:0 dropped:0 overruns:0 frame:0
      TX packets:3208 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes:294088 (294.0 KB)  TX bytes:294088 (294.0 KB)

ipconfig (in the guest) yields the following:

Windows IP Configuration
Ethernet adapter Local Area Connection:
   Connection-specific DNS Suffix . :
   Link-local IPv6 Address . . . . . : fe80::38ba:dbca:a21d:c3d1%13
   Autoconfiguration IPv4 Address. . : 169.254.195.209
   Subnet Mask . . . . . . . . . . . : 255.255.0.0
   Default Gateway . . . . . . . . . :
Tunnel adapter isatap.{B292E440-679D-4FC5-8E34-77D6804669C8}:
   Media State . . . . . . . . . . . : Media disconnected
   Connection-specific DNS Suffix . :
Tunnel adapter Local Area Connection* 11:
   Media State . . . . . . . . . . . : Media disconnected
   Connection-specific DNS Suffix . :

I'm not sure what else to do. Can someone provide the troubleshooting steps to determine what the problem is and a possible solution?
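One observation on the output above: the 169.254.x.x autoconfiguration address in the guest means the bridged adapter never reached the host's network. A hypothetical first check from the host (the VM name "Windows 7" is an assumption) would be to confirm the bridge is still attached to the interface the host actually uses (eth1, the wireless card):

# Show which host interface the guest's NIC is bridged to:
VBoxManage showvminfo "Windows 7" | grep NIC
# Re-attach the bridge to eth1 if it points elsewhere:
VBoxManage modifyvm "Windows 7" --nic1 bridged --bridgeadapter1 eth1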

    Read the article

  • AI to move custom-shaped spaceships (shape affecting movement behaviour)

    - by kaoD
I'm designing a networked, turn-based, 3D-6DOF space fleet combat strategy game which relies heavily on ship customization. Let me explain the game a bit, since you need to know a bit about it to understand the question. What I aim for is the ability to create your own fleet of ships with custom shapes and attached modules (propellers, tractor beams...) which would give advantages and disadvantages to each ship, so you have lots of different fleet distributions. E.g., a long ship with two propellers at the side would let the ship spin around that plane easily; bigger ships would move slowly unless you place lots of propellers at the back (therefore spending more "construction" points and energy when moving), and it will only move fast in that direction. I plan to balance the whole game around this feature. The game would revolve around two phases: an orders phase and a combat phase. During the orders phase, you command the different ships. When all players finish the orders phase, the combat phase begins and the ship orders get resolved in real time for some time; then the action pauses and there's a new orders phase. The problem comes when I think about player input. To move a ship, you need to turn different propellers on or off if you want to steer, travel forward, brake, rotate in place... These propellers don't have to work at full power, so you can achieve more movement combinations with fewer propellers. I think this approach is a bit boring: the player doesn't want to fiddle with motors or anything, you just want to MOVE and KILL. The way I intend the player to give orders to these ships is by a destination and a rotation, and then the AI would calculate the correct propeller power to achieve that movement and rotation. Propulsion doesn't have to stay the same throughout the entire turn calculation (after the orders have been given), so it would be cool if the ships reacted as they moved, adjusting the power of the propellers to their needs dynamically, but that may be too hard to implement and it's not really needed for the game to work. In both cases, how would that AI decide which propellers to activate for the best (or at least not worst) trajectory?

I thought about some approaches:

Learning AI: the ship types would learn about their movement by trial and error, adjusting their behaviour with more uses, and finally becoming "smart". I don't want to get THAT far into AI coding, and I think it could be frustrating for the player (even if you can let it learn without playing).

Pre-calculated timestep movement: upon ship creation, ALL possible movements are calculated for each propeller configuration and power for a given delta-time. Memory intensive, ugly, bad.

Pre-calculated trajectories: the same as above, but not for each delta-time but for the whole trajectory, which would then be fitted as closely as possible. Requires a fixed propeller configuration for the whole combat phase and is still memory intensive, ugly and bad.

Continuous brute forcing: the AI continuously checks ALL possible propeller configurations throughout the entire combat phase, precalculates a few time steps and decides which is the best one based on that. Con: what's good now might not be that good later, and it's too CPU intensive, ugly, and bad too.

Single brute forcing: same as above, but only brute forcing at the beginning of the simulation, so it needs a constant propeller configuration throughout the entire combat phase.
Continuous angle check: this is not a full movement method, but maybe a way to discard "stupid" propeller configurations. Given a propeller's current thrust direction and the desired one, you can approximate the power needed for the propeller based on the angle between them (a sketch of this idea follows below). You must do this continuously throughout the whole combat phase. I figured this one out recently, so I haven't put much thought into it. A priori, it has the "what's good now might not be that good later" drawback too, and it doesn't care about the other propellers, which may act together to make a better propelling configuration. I'm really stuck here. Any ideas?
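To make the "continuous angle check" idea concrete, here is a minimal sketch in C#; XNA-style Vector3 and MathHelper are assumed, and the method name is illustrative. It scores each propeller by how well its thrust direction aligns with the desired movement direction and shuts off propellers pointing the wrong way:

// Hypothetical helper for the angle-check heuristic described above.
// thrustDir: the direction this propeller pushes the ship.
// desiredDir: the direction we want the ship to move this timestep.
// Returns a power in [0, 1]; misaligned propellers get 0 (off).
static float PowerFor(Vector3 thrustDir, Vector3 desiredDir)
{
    float alignment = Vector3.Dot(Vector3.Normalize(thrustDir),
                                  Vector3.Normalize(desiredDir));
    return MathHelper.Clamp(alignment, 0f, 1f);
}

Re-evaluating this every few timesteps (rather than once per turn) would soften the "what's good now might not be good later" drawback at modest CPU cost, though it still ignores propellers acting together.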

    Read the article

  • Fetching Partition Information

    - by Mike Femenella
For a recent SSIS package at work I needed to determine the distinct values in a partition, the number of rows in each partition, and the filegroup name on which each partition resided, in order to come up with a grouping mechanism. Of course sys.partitions comes to mind for some of that, but there are a few other tables you need to link to in order to grab the information required. The table I'm working on contains 8.8 billion rows. Finding the distinct partition keys from this table was not a fast operation. My original solution was to create a temporary table, grab the distinct values for the partitioned column, then update via sys.partitions for the rows and the $PARTITION function for the partition id, and finally look back to the sys.filegroups table for the filegroup names. It wasn't pretty; it could take up to 15 minutes to return the results. The primary issue is pulling distinct values from the table: queries for distinct against 8.8 billion rows don't go quickly. A few beers into a conversation with a friend, we ended up talking about work, which led to the task described above. The solution was already built into SQL Server; I just needed to pull it together. The first table I needed was sys.partition_range_values. This contains one row for each range boundary value of a partition function. In my case I have a partition function which uses dayid values; for example, July 4th would be represented as an int, 20130704. This table lists all of the dayid values which were defined in the function. This eliminated the need to query my source table for distinct dayid values; everything I needed was already built in for me. The only caveat was that in my SSIS package I needed to create a bucket for any dayid values that were out of bounds for my function. For example, if my function handled 20130501 through 20130704 and I had day values of 20130401 or 20130705 in my table, these would not be listed in sys.partition_range_values. I just created an "everything else" bucket in my SSIS package, just in case I had any dayid values unaccounted for. Getting the number of rows for a partition is very easy: the sys.partitions table contains values for each partition, easy enough to achieve by querying for the object_id and an index value of 1 (the clustered index). The final piece of information was the filegroup name. There are two options available to get the filegroup name, sys.data_spaces or sys.filegroups. For my query I chose sys.filegroups, but really it's a matter of preference and data needs. In order to bridge between the sys.partitions table and either sys.data_spaces or sys.filegroups you need the container_id. This can be done by joining sys.allocation_units.container_id to sys.partitions.hobt_id. sys.allocation_units contains the field data_space_id, which then lets you join in either sys.data_spaces or sys.filegroups. The end result is the query below, which typically executes for me in under 1 second. I've included the joins to both sys.filegroups and sys.data_spaces, with the sys.data_spaces join commented out. As I mentioned above, this shaves a good 10-15 minutes off of my original SSIS package and is a really easy tweak to get a boost in my ETL time. Enjoy.
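Since the final query is referenced above but not shown, here is a sketch reconstructed from the description; the table name dbo.MyBigTable and the partition function name pfDayId are placeholders you would swap for your own:

-- Row count and filegroup per partition (clustered index only):
SELECT p.partition_number,
       p.rows,
       fg.name AS filegroup_name
       -- , ds.name AS data_space_name  -- alternative via sys.data_spaces
FROM sys.partitions AS p
JOIN sys.allocation_units AS au
  ON au.container_id = p.hobt_id
JOIN sys.filegroups AS fg
  ON fg.data_space_id = au.data_space_id
-- JOIN sys.data_spaces AS ds ON ds.data_space_id = au.data_space_id
WHERE p.object_id = OBJECT_ID('dbo.MyBigTable')
  AND p.index_id = 1;

-- Distinct boundary (dayid) values defined on the partition function:
SELECT prv.boundary_id, prv.value AS dayid
FROM sys.partition_range_values AS prv
JOIN sys.partition_functions AS pf
  ON pf.function_id = prv.function_id
WHERE pf.name = 'pfDayId';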

    Read the article

  • Better way to load level content in XNA?

    - by user2002495
Currently I load all my assets in XNA in the main Game class. What I want to achieve later is to load only the specific assets each level needs (the game will consist of many levels). Here is how I load my main assets in the main class:

protected override void LoadContent()
{
    spriteBatch = new SpriteBatch(GraphicsDevice);
    plane = new Player(Content.Load<Texture2D>(@"Player/playerSprite"), 6, 8);
    plane.animation = "down";
    plane.pos = new Vector2(400, 500);
    plane.fps = 15;
    Global.currentPos = plane.pos;
    lvl1 = new Level1(Content.Load<Texture2D>(@"Levels/bgLvl1"),
                      Content.Load<Texture2D>(@"Levels/bgLvl1-other"),
                      new Vector2(0, 0), new Vector2(0, -600));
    CommonBullet.LoadContent(Content);
    CommonEnemyBullet.LoadContent(Content);
}

protected override void UnloadContent()
{
}

protected override void Update(GameTime gameTime)
{
    if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
        this.Exit();

    plane.Update(gameTime);
    lvl1.Update(gameTime);

    foreach (CommonEnemy ce in cel)
    {
        if (ce.CollidesWith(plane))
        {
            ce.hasSpawn = false;
        }
        foreach (CommonBullet b in plane.commonBulletList)
        {
            if (b.CollidesWith(ce))
            {
                ce.hasSpawn = false;
            }
        }
        ce.Update(gameTime);
    }

    LoadCommonEnemy();
    base.Update(gameTime);
}

private void LoadCommonEnemy()
{
    int randY = rand.Next(-600, -10);
    int randX = rand.Next(0, 750);
    if (cel.Count < 3)
    {
        cel.Add(new CommonEnemy(Content.Load<Texture2D>(@"Enemy/Common/commonEnemySprite"), 7, 2, "left", randX, randY));
    }
    for (int i = 0; i < cel.Count; i++)
    {
        if (!cel[i].hasSpawn)
        {
            cel.RemoveAt(i);
            i--;
        }
    }
}

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.Black);
    spriteBatch.Begin();
    lvl1.Draw(spriteBatch);
    plane.Draw(spriteBatch);
    foreach (CommonEnemy ce in cel)
    {
        ce.Draw(spriteBatch);
    }
    spriteBatch.End();
    base.Draw(gameTime);
}

I wish to load my player, enemies, and everything else in the Level1 class. However, when I move my player & enemy code into the Level1 class, the gameTime returns null. Here is my Level1 class:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Audio;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Media;
using Microsoft.Xna.Framework.Input;
using SpaceShooter_Beta.Animation.PlayerCollection;
using SpaceShooter_Beta.Animation.EnemyCollection.Common;

namespace SpaceShooter_Beta.Levels
{
    public class Level1
    {
        public Texture2D bgTexture1, bgTexture2;
        public Vector2 bgPos1, bgPos2;
        public float speed = 5f;
        Player plane;

        public Level1(Texture2D texture1, Texture2D texture2, Vector2 pos1, Vector2 pos2)
        {
            this.bgTexture1 = texture1;
            this.bgTexture2 = texture2;
            this.bgPos1 = pos1;
            this.bgPos2 = pos2;
        }

        public void LoadContent(ContentManager cm)
        {
            plane = new Player(cm.Load<Texture2D>(@"Player/playerSprite"), 6, 8);
            plane.animation = "down";
            plane.pos = new Vector2(400, 500);
            plane.fps = 15;
            Global.currentPos = plane.pos;
        }

        public void Draw(SpriteBatch sb)
        {
            sb.Draw(bgTexture1, bgPos1, Color.White);
            sb.Draw(bgTexture2, bgPos2, Color.White);
            plane.Draw(sb);
        }

        public void Update(GameTime gt)
        {
            bgPos1.Y += speed;
            bgPos2.Y += speed;
            if (bgPos1.Y >= 600)
            {
                bgPos1.Y = -600;
            }
            if (bgPos2.Y >= 600)
            {
                bgPos2.Y = -600;
            }
            plane.Update(gt);
        }
    }
}

Of course, when I did this I deleted all the player code from the main Game class. Everything compiles fine (no errors), but the game cannot start.
The debugger says that plane.Update(gt); in the Level1 class has a null GameTime, and the same thing happens in the Level1 Draw method. Please help; I appreciate your time. [EDIT] I know that using a switch in the main class could be a solution, but I'd prefer something cleaner, since a switch still means loading all the assets through the main class, and that code will grow a lot with each level.
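For what it's worth, a likely culprit in the code above (this is an assumption, not part of the original question) is that the Game class never calls lvl1.LoadContent(Content), so the plane field inside Level1 is still null when Update and Draw run; a NullReferenceException in plane.Update(gt) can look like a "null GameTime" in the debugger. A minimal sketch of the missing wiring:

// In the main Game class: hand the ContentManager to the level so it can
// load its own assets before the first Update/Draw call.
protected override void LoadContent()
{
    spriteBatch = new SpriteBatch(GraphicsDevice);
    lvl1 = new Level1(Content.Load<Texture2D>(@"Levels/bgLvl1"),
                      Content.Load<Texture2D>(@"Levels/bgLvl1-other"),
                      new Vector2(0, 0), new Vector2(0, -600));
    lvl1.LoadContent(Content); // without this, Level1.plane is never created
}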

    Read the article

  • Move on and look elsewhere, or confront the boss?

    - by Meister
Background: I have my Associates in Applied Science (Comp/Info Tech) with a strong focus in programming, and I'm taking university classes to get my Bachelors. I was recently hired at a local company to be a Software Engineer I on a team of about 8, and I've been told they're looking to hire more. This is my first job, and I was offered what I feel is an extremely generous starting salary ($30/hr essentially, plus benefits and a yearly bonus). What got me hired was my passion for programming and a strong set of personal projects.

Problem: I had no prior experience when I interviewed, so I didn't know exactly what to ask about the company when I was hired. I've spotted a number of warning signs and annoyances since then, such as:

Four developers when I started, with everyone talking about "Ben" or "Ryan" leaving. One engineer was hired thirty days before me, one two weeks after me. Most of the department has been hiring a large number of people since I started.

Extremely limited internet access. I understand the idea from an IT point of view, but not only is Facebook blocked, so are YouTube, Twitter, and Pandora. I've also figured out that they block all access to non-DNS websites (http://xxx.xxx.xxx.xxx/) and, strangely enough, Miranda-IM.

Low cubicles. Which is fine because I like my immediate coworkers, but they put the developers in a huge open room together with customer service, customer training, and the QA department. Noise, noise, noise, and people stop to chitchat all day long. Headphones only go so far.

Several emails have been sent out by my boss since I started telling us programmers not to talk about non-work-related things like video games at our cubicles, despite us only spending maybe five minutes every few hours doing so. Further digging tells me that this is because someone keeps complaining that the programmers are "slacking off".

People are looking over my shoulder all day. I was in the Freenode webchat to get help with a programming issue, and within minutes I had an email from my boss (sent to all the developers) telling us that we should NOT be connected to any outside chat servers at work.

A version control system from 2005 that we must access with IE, keeping the Java 1.4 JRE installed to be able to use it. I accidentally updated to Java 6 one day and spent the next two days fighting with my PC to undo this "problem".

No source control discipline, no comments on anything, no standards, no code review, no unit testing, no common sense. I literally found a problem in how they handle string resource translations that stems from the simple fact that they don't trim excess white space, leading to developers writing getResource("Date: ") instead of getResource("Date") + ": ", and I was told to just add the excess white space back to the database instead of dealing with the issue directly.

Some of these things I'd like to try to understand, but I like having IRC open to talk in a few different rooms during the day and keep in touch with friends/family over IM. They don't break my concentration (not NEARLY as much as the lady from QA stopping by to talk about her son), but because people are looking over my shoulder all day as they walk by, they complain when they see something that's not "programmer-looking work". I've been told by my boss and QA that I do good, fast work.
I should be judged on my work output and quality, not on what I have up on my screen for the five seconds you're walking by. So, my question is, even though I'm just barely at my 90 days: how do you decide when to move on from a job and look elsewhere, versus when to start working with your boss to resolve these issues? Is it even possible to get the boss to work with me on many of these things? This is the only place I heard back from, even though I sent out several résumés a day for several months, and this place does pay well for putting up with its many flaws, but I'm already starting to get miserable working here. Should I just put up with it?

    Read the article

  • Build 2012, the first post

    - by Dennis Vroegop
Yes, I was one of the lucky few who made it to Build. Build, formerly known as the Professional Developers Conference (or PDC), is the place to be if you are a developer on the Microsoft platform. Since I take my job seriously, I took some time out of my busy schedule, sighed at the thought of not seeing my family for another week, and signed up for it. Now, before I talk about the amazing Surface devices (yes, this posting is written on one of them), the great Lumia 920 we all got, the long-deserved love for touch, NUI, and other things I have been talking about for years, I need to do some ranting. So if you are anxious to read about the technical goodies, you'll have to wait until the next post. Still here? Good. When I signed up for the Build conference during my holidays this summer, it was pretty obvious that demand would be high. Therefore I made sure I was on time. But even though I registered only 7 minutes after the initial opening time, the Early Bird discount for the first 500 attendees was already sold out. I later learned that registration actually started 5 minutes before the scheduled time, but even so, it is still impressive how fast things went. The whole event sold out in 57 minutes. Or so they say… A lot of people got put on the waiting list. There was room for about 1500 attendees, and I heard that at least 1000 people were on that waiting list, including a lot of people I know. Strangely, all of them got tickets assigned after 2 weeks. Here at the conference I heard from a guy from Nokia that they had shipped 2500 Lumia 920 phones. That number matches the rumors that the organization added 1000 extra tickets. This, of course, is no problem. I am not an elitist, and I think large crowds have a special atmosphere that I quite like. But… the Microsoft campus is not equipped for that sheer volume of visitors. That was painfully obvious during on-site registration, where people had to stand in line for over 2 hours. The conference is spread out over 2 buildings, divided by a 15-minute bus ride (yes, the campus is that big). I have seen queues of over 200 people waiting for the bus, and when it arrived it had a capacity of 16. I can assure you: that doesn't fit. This of course means that travelling from one site to the other might take about 30 minutes. So you arrive at the session room just in time, only to find out it's full. Since you can't get into that session you try to find another one, but now you're even later, so you have no chance at all of entering. The doors are closed and you're told: "Well, you can watch the live stream online". Mmmm… So I spend thousands of dollars and a week away from home, family and work, only to be told I can also watch the sessions online? Are you fricking kidding me? I could go on but I won't. You get the idea. It's just badly organized, something I am not really used to in my 20 years of experience at Microsoft events. Yes, I am disappointed. I hope a lot of people here in Redmond will also fill in the evals, and that the organization will do a better job next year. Really, Build deserves better. </rantmode>

    Read the article

  • How do I get to work my Atheros AR9485 Wireless card in Ubuntu 14.04 LTS?

    - by Ivan Kreimer
I've recently installed Ubuntu 14.04 LTS on my ASUS D550CA, and so far things have gone great. The only problem I've got is the Wi-Fi: it doesn't work. I've got a Qualcomm Atheros AR9485. I've tried installing the drivers, but the system says it doesn't find any. So I started looking around this forum for solutions. I've read every single post on this forum about my Wi-Fi network adapter, and I've found nothing that solves my problem. Let me give you some info about my configuration. Disclaimer: my machine runs in Spanish, so I've translated the output labels below.

When I run sudo lshw -C network, this is what I get:

*-network DISABLED
   description: Wireless interface
   product: AR9485 Wireless Network Adapter
   vendor: Qualcomm Atheros
   physical id: 0
   bus info: pci@0000:02:00.0
   logical name: wlan0
   version: 01
   serial: 28:e3:47:5c:5d:3f
   width: 64 bits
   clock: 33MHz
   capabilities: pm msi pciexpress bus_master cap_list rom ethernet physical wireless
   configuration: broadcast=yes driver=ath9k driverversion=3.13.0-34-generic firmware=N/A latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
   resources: irq:17 memory:f7d00000-f7d7ffff memory:f7d80000-f7d8ffff
*-network
   description: Ethernet interface
   product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
   vendor: Realtek Semiconductor Co., Ltd.
   physical id: 0.2
   bus info: pci@0000:03:00.2
   logical name: eth0
   version: 06
   serial: e0:3f:49:ce:57:49
   size: 100Mbit/s
   capacity: 100Mbit/s
   width: 64 bits
   clock: 33MHz
   capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
   configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl8402-1_0.0.1 10/26/11 ip=181.165.245.39 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s
   resources: irq:41 ioport:e000(size=256) memory:f0004000-f0004fff memory:f0000000-f0003fff

At the beginning, you'll see that it says "*-network DISABLED" (that's my translation) — is that something bad?

When I run iwconfig:

eth0   no wireless extensions.
lo     no wireless extensions.
wlan0  IEEE 802.11bgn  ESSID:off/any
       Mode:Managed  Access Point: Not-Associated  Tx-Power=off
       Retry long limit:7  RTS thr:off  Fragment thr:off
       Power Management:off

Finally, when I run sudo rfkill list all, I get:

0: phy0: Wireless LAN
   Soft blocked: no
   Hard blocked: yes
1: asus-wlan: Wireless LAN
   Soft blocked: no
   Hard blocked: no
2: asus-bluetooth: Bluetooth
   Soft blocked: no
   Hard blocked: no

I hope someone can help me solve this; I've searched a lot and found no solution. Thanks a lot.
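One hedged observation on the output above: rfkill reports phy0 as hard-blocked, which usually indicates the laptop's physical wireless toggle (on ASUS machines often an Fn key combination) rather than a missing driver — ath9k is already loaded. A hypothetical first check would be:

sudo rfkill unblock all   # clears soft blocks only; a hard block needs the
                          # physical/Fn wireless switch to be toggled
rfkill list all           # re-check whether "Hard blocked" changed to "no"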

    Read the article

< Previous Page | 237 238 239 240 241 242 243 244 245 246 247 248  | Next Page >