Search Results

Search found 33677 results on 1348 pages for 'access levels'.

Page 444 of 1348

  • Can't write to disk

    - by nofacts
    I have seen questions similar to this, but the answers are either beyond me or the situation doesn't quite match mine. Would appreciate some direction. I recently installed Ubuntu 12.04 LTS. The OS is on a disk formatted as ext4. I added another disk to the system and formatted it as W95 FAT32 (LBA) (0x0c). I did this because I am moving from Windows to Linux, will need to move data back and forth for a while, and might need to move the disk to a Windows machine. There may have been a better format to use, but if so I didn't know any better. I was able to transfer data from an external drive to this FAT32 drive with no problem. Now, though, when I try to create a new folder or write a file to the disk I get a message that the disk is read-only. If I open the disk's Properties and look at Permissions, Folder Access says 'create and delete files'. If I try to change the File Access setting underneath to 'read and write', either nothing happens or I get a message that it can't be done. Thank you for any help.

    Read the article

  • How to write code that communicates with an accelerator in the real address space (real mode)?

    - by ysap
    This is a preliminary question for the issue, where I was asked to program a host-accelerator program on an embedded system we are building. The system is composed of (among the standard peripherals) an ARM core and an accelerator processor. Both processors access the system bus via their bus interfaces, and share the same 32-bit global physical memory space. Both share access to the system's DRAM through the system bus. (The computer concept is similar to BeagleBoard/Raspberry Pi, but with a specialized accelerator added.) The accelerator has its own internal memory (SRAM) which is exposed to the system and occupies a portion of the global address space (as opposed to how a graphics card would talk to the CPU via a "small" aperture in the system memory space). On the ARM core (the host) we plan on running Ubuntu 12.04. The mode of operation for communicating between the processors should be that the host issues memory transactions on the system bus that are targeted at the accelerator's internal memory. As far as my understanding goes, if I write a program for the host that simply writes to the physical address of the accelerator, chances are that the program will crash due to a segmentation violation. So, I assume that I need some way of communicating with the device in real mode. What is the easiest way to achieve this mode of operation?
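
    In case it helps frame the answers: on Linux the usual user-space shortcut is not real mode at all, but mapping the accelerator's physical SRAM window into the process with mmap() on /dev/mem (a UIO or custom kernel driver is the more robust long-term route). Below is a minimal, hedged sketch in Python; the base address and size are hypothetical placeholders, it must run as root, and it only works if the kernel's CONFIG_STRICT_DEVMEM settings allow access to that region.
        import mmap
        import os
        import struct

        # Hypothetical values -- replace with the accelerator SRAM window from your memory map.
        ACCEL_SRAM_BASE = 0x40000000   # physical base address (must be page-aligned)
        ACCEL_SRAM_SIZE = 0x00010000   # window size in bytes (64 KiB here)

        # Map the physical window into this process; O_SYNC requests uncached access.
        fd = os.open("/dev/mem", os.O_RDWR | os.O_SYNC)
        try:
            sram = mmap.mmap(fd, ACCEL_SRAM_SIZE, mmap.MAP_SHARED,
                             mmap.PROT_READ | mmap.PROT_WRITE,
                             offset=ACCEL_SRAM_BASE)
        finally:
            os.close(fd)   # the mapping remains valid after the fd is closed

        # 32-bit little-endian write, then read it back, at offset 0x100 in the window.
        sram[0x100:0x104] = struct.pack("<I", 0xDEADBEEF)
        value, = struct.unpack("<I", sram[0x100:0x104])
        print(hex(value))

        sram.close()
    The same pattern in C (open("/dev/mem"), mmap with MAP_SHARED, volatile pointer accesses) avoids the segmentation violation because the kernel sets up the page tables for you; whether byte-wide accesses are legal on the accelerator's bus is a hardware question worth checking.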

    Read the article

  • ksoftirqd uses 100% cpu

    - by andy
    I am running 32-bit Ubuntu 10.04. A lot of the time ksoftirqd/0 or ksoftirqd/1 starts using up 100% CPU for no apparent reason, and I am forced to reboot my laptop. Incidentally this also happens when I maximize my (YouTube) videos in Chrome and Firefox, but once I un-maximize the videos the CPU usage goes back down to the original levels. Any ideas what is going on?
    --- Addendum --- dmesg produces a ~2000-line output. I searched for 'error' and 'warning' in the output, and here are the relevant lines (along with some headers):
        [ 0.000000] Initializing cgroup subsys cpuset
        [ 0.000000] Initializing cgroup subsys cpu
        [ 0.000000] Linux version 2.6.32-21-generic (buildd@yellow) (gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5) ) #32-Ubuntu SMP Fri Apr 16 08:09:38 UTC 2010 (Ubuntu 2.6.32-21.32-generic 2.6.32.11+drm33.2)
        [ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-2.6.32-21-generic root=UUID=157dcfda-acd6-4d1b-a6a8-ff9ccff61906 ro quiet splash
        [ 0.000000] KERNEL supported cpus:
        [ 0.000000]   Intel GenuineIntel
        [ 0.000000]   AMD AuthenticAMD
        [ 0.000000]   Centaur CentaurHauls
        [ 0.000000] BIOS-provided physical RAM map:
        [ 24.775546] EXT3-fs warning: mounting fs with errors, running e2fsck is recommended
        [44920.210518] ata1: SError: { PHYRdyChg CommWake 10B8B Dispar LinkSeq TrStaTrns }
        [44920.210531] res 40/00:00:f0:4b:7f/00:00:18:00:00/40 Emask 0x10 (ATA bus error)
        [58673.134623] chrome[20101]: segfault at 7f38bc4ad000 ip 00007f38be769ecc sp 00007fff24616850 error 4 in libpepflashplayer.so[7f38bdc08000+e55000]
        [ 24.775546] EXT3-fs warning: mounting fs with errors, running e2fsck is recommended
        [44920.210531] res 40/00:00:f0:4b:7f/00:00:18:00:00/40 Emask 0x10 (ATA bus error)

    Read the article

  • Two Cloudy Observations from Oracle OpenWorld

    - by GeneEun
    Now that the dust has settled from another amazing Oracle OpenWorld, I wanted to reflect back on a couple of key observations I made during the event. First, it was pretty clear that Cloud was again a big deal at this year's conference. Yes, the Oracle Database 12c announcement was also huge, but for most it was hard to not notice that Oracle continues to be "all-in" with respect to cloud computing. Just to give you an idea of the emphasis on Cloud, there were over 300 Cloud-related sessions at this year's OpenWorld. If you caught some of the demo booths in the Oracle Red Lounge, then you saw some of the great platform, application, and social services that are now part of Oracle Cloud, as well as numerous demos of private cloud products that Oracle offers. Second, during Thomas Kurian's keynote presentation on Oracle Cloud, he announced the Preview Availability of a new service called Oracle Developer Cloud Service. This new platform service will provide developers with instant access to environments to better manage the application development lifecycle in the cloud. It provides development project teams access to favorite tools like Hudson, Git, Github, wikis, and tasks to help make innovation faster, more collaborative, and more effective. There's also integration with IDEs like Eclipse, NetBeans, and JDeveloper. If you're a developer, it's an awesome addition to Oracle Cloud's platform services! Want more details about Oracle Developer Cloud Service? Click here.

    Read the article

  • Dealing with the node callback pyramid

    - by thecoop
    I've just started using node, and one thing I've quickly noticed is how quickly callbacks can build up to a silly level of indentation:
        doStuff(arg1, arg2, function(err, result) {
            doMoreStuff(arg3, arg4, function(err, result) {
                doEvenMoreStuff(arg5, arg6, function(err, result) {
                    omgHowDidIGetHere();
                });
            });
        });
    The official style guide says to put each callback in a separate function, but that seems overly restrictive on the use of closures, and it makes it awkward to keep a single object declared at the top level available several layers down, as the object has to be passed through all the intermediate callbacks. Is it ok to use function scope to help here? Put all the callback functions that need access to a global-ish object inside a function that declares that object, so it goes into a closure?
        function topLevelFunction(globalishObject, callback) {
            function doMoreStuffImpl(err, result) {
                doMoreStuff(arg5, arg6, function(err, result) {
                    callback(null, globalishObject);
                });
            }
            doStuff(arg1, arg2, doMoreStuffImpl);
        }
    and so on for several more layers... Or are there frameworks etc. to help reduce the levels of indentation without declaring a named function for every single callback? How do you deal with the callback pyramid?

    Read the article

  • Announcing SharePoint Saturday Columbus 2010

    - by Brian Jackett
    It is with great pleasure that today I can announce the very first SharePoint Saturday Columbus.  SharePoint Saturday Columbus 2010 will be happening on August 14th at The Conference Center at OCLC in Dublin, OH.  As many readers of my blog may be aware, I’ve attended or spoken at over half a dozen SharePoint Saturdays in the past 8 months alone, but this will be my first time actually organizing one.  A group of very dedicated individuals and I have been hard at work the past few months getting the ball rolling, and we’re happy to see it taking shape.
    Pertinent Resources
    Website – find announcements and up-to-date details at www.SharePointSaturday.org/Columbus
    Twitter – follow us at @SPSColumbus
    Email – email us at [email protected] with any questions, comments, or concerns
    What can you do?  There are three main areas where we are looking for your help at this time.
    Spread the word – simply put, start spreading the word to friends, coworkers, user groups, clients, and anyone else you think may be interested in SharePoint Saturday Columbus 2010.  We’ll be opening registration in early July, so look for an announcement with details closer to that timeframe.
    Sponsorship – if your company or a company you know is interested in sponsoring SharePoint Saturday Columbus 2010, we have many sponsorship opportunity levels available.  Email [email protected] for more information and we’ll send you a sponsorship packet.
    Speakers – if you or someone you know is interested in presenting at SharePoint Saturday Columbus 2010, please fill out a speaker submission form found here and email it to [email protected] by July 10th.
    I hope you can join us for this great event!
    -Frog Out

    Read the article

  • How should I track approval workflow when users at every security level can create a request?

    - by Eric Belair
    I am writing a new application that allows users to enter requests. Once a request is entered, it must follow an approval workflow to be finally approved by a user at the highest security level. So, let's say a user at Security Level 1 enters a request. This request must be approved by his superior - a user at Security Level 2. Once the Security Level 2 user approves it, it must be approved by a user at Security Level 3. Once the Security Level 3 user approves it, it is considered fully approved. However, users at any of the three Security Levels can enter requests. So, if a Security Level 3 user enters a request, it is automatically considered "fully approved". And, if a Security Level 2 user enters a request, it must only be approved by a Security Level 3 user. I'm currently storing each approval status in a Database Log Table, like so:
        STATUS_ID (PK)  REQUEST_ID  STATUS           STATUS_DATE
        --------------  ----------  ---------------  -----------------------
        1               1           USER_SUBMIT      2012-09-01 00:00:00.000
        2               1           APPROVED_LEVEL2  2012-09-01 01:00:00.000
        3               1           APPROVED_LEVEL3  2012-09-01 02:00:00.000
        4               2           USER_SUBMIT      2012-09-01 02:30:00.000
        5               2           APPROVED_LEVEL2  2012-09-01 02:45:00.000
    My question is, which is the better design:
    1. Record all three statuses for every request, or
    2. Record only the statuses needed according to the Security Level of the user submitting the request.
    In Case 2, the data might look like this for two requests - one submitted by a Security Level 2 user and another submitted by a Security Level 3 user:
        STATUS_ID (PK)  REQUEST_ID  STATUS           STATUS_DATE
        --------------  ----------  ---------------  -----------------------
        1               3           APPROVED_LEVEL2  2012-09-01 01:00:00.000
        2               3           APPROVED_LEVEL3  2012-09-01 02:00:00.000
        3               4           APPROVED_LEVEL3  2012-09-01 02:00:00.000
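
    For what it's worth, the Case 2 convention in the sample data above (a submitter's own entry counts as that level's approval, and only higher levels still need to sign off) is easy to express in application code. A small illustrative sketch in Python - the function name and the max_level default are my own, not from the question:
        def required_statuses(submitter_level, max_level=3):
            # Level 1 submissions are logged as USER_SUBMIT; submissions from higher
            # levels count as that level's approval, matching the sample rows above.
            if submitter_level == 1:
                first = "USER_SUBMIT"
            else:
                first = "APPROVED_LEVEL%d" % submitter_level
            later = ["APPROVED_LEVEL%d" % lvl
                     for lvl in range(submitter_level + 1, max_level + 1)]
            return [first] + later

        print(required_statuses(1))  # ['USER_SUBMIT', 'APPROVED_LEVEL2', 'APPROVED_LEVEL3']
        print(required_statuses(2))  # ['APPROVED_LEVEL2', 'APPROVED_LEVEL3']
        print(required_statuses(3))  # ['APPROVED_LEVEL3']
    Whichever design you pick, having one place that derives the expected status chain keeps the two options easy to compare.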

    Read the article

  • Languages with a clear distinction between subroutines that are purely functional, mutating, state-changing, etc?

    - by CPX
    Lately I've become more and more frustrated that in most modern programming languages I've worked with (C/C++, C#, F#, Ruby, Python, JS and more) there is very little, if any, language support for determining what a subroutine will actually do. Consider the following simple pseudo-code:
        var x = DoSomethingWith(y);
    How do I determine what the call to DoSomethingWith(y) will actually do? Will it mutate y, or will it return a copy of y? Does it depend on global or local state, or is it only dependent on y? Will it change the global or local state? How does closure affect the outcome of the call? In all languages I've encountered, almost none of these questions can be answered by merely looking at the signature of the subroutine, and there is almost never any compile-time or run-time support either. Usually, the only way is to put your trust in the author of the API, and hope that the documentation and/or naming conventions reveal what the subroutine will actually do. My question is this: Are there any languages today that make symbolic distinctions between these types of scenarios and place compile-time constraints on what code you can actually write? (There is of course some support for this in most modern languages, such as different levels of scope and closure, the separation between static and instance code, lambda functions, et cetera. But too often these seem to come into conflict with each other. For instance, a lambda function will usually either be purely functional, and simply return a value based on input parameters, or mutate the input parameters in some way. But it is usually possible to access static variables from a lambda function, which in turn can give you access to instance variables, and then it all breaks apart.)
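
    To make the complaint concrete, here is a minimal Python illustration (my own, not from the post): the two functions have identical call sites, yet nothing in either signature tells the caller that one mutates its argument while the other returns a fresh value.
        def sorted_copy(items):
            # Purely functional with respect to the argument: returns a new list.
            return sorted(items)

        def sort_in_place(items):
            # Mutates the argument and returns it; the call site looks the same.
            items.sort()
            return items

        y = [3, 1, 2]
        x = sorted_copy(y)      # y is still [3, 1, 2]
        x = sort_in_place(y)    # y is now [1, 2, 3] -- nothing warned the caller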

    Read the article

  • November New Member Offers

    - by Cassandra Clark - OTN
    Happy November!  OTN has worked with its partners to bring you more new offers or extend their existing ones.
    Oracle Press - New Offer - Oracle Technology Network members get 40% off the newest Oracle Press titles by Oracle ACE Mark Rittman, Oracle Business Intelligence 11g Developers Guide and Oracle Exalytics Revealed (ebook format only).
    Extended Offers:
    Oracle Store - Save 10% on your next software purchase from the Oracle Store.
    Pearson Publishing - 35% off Hacker's Delight.
    Manning Publishing - 41% off the MEAP, eBook and print format of the following books: Making Java Groovy; OCA Java SE 7 Programmer I Certification Guide.
    Safari Books Online - OTN members get 30 days of free access + 20% off unlimited access to Safari Books Online for 6 months.
    Packt Publishing - 25% off the print books and 35% off the eBooks listed below: Getting Started with Oracle Data Integrator 11g: A Hands-On Tutorial; Oracle Business Intelligence Enterprise Edition 11g: A Hands-On Tutorial; Oracle Certified Associate, Java SE 7 Programmer Study Guides.
    Murach Publishing - Get 30% off for OTN members - Murach's SQL Server 2012 for Developers by Bryan Syverson and Joel Murach.
    Get all of this from the OTN Member Discount Page!

    Read the article

  • Existing laravel 4 project gives 404 in browser

    - by Richard A
    I'm trying to set up a development environment on a virtual machine running Ubuntu 14.04 LTS using Nginx and HHVM. To do this, I followed the tutorial here. This goes well with a new installation of Laravel. But when I import an existing Laravel 4 project and try to open that on my actual machine (which will serve as the client running Windows 7), I'm getting a 404 File Not Found error on the screen while connecting to http://sav.savrichard.dev. I did add this to the hosts file with the correct IP address. The virtual machine is receiving the request and responds with a 404 error. How do I solve this error? I'm pretty new to Ubuntu so I'm not exactly sure what's wrong. The project is located at /var/www/sav.savrichard.net. The server configuration is as follows:
        server {
            listen 80 default_server;
            root /var/www/sav.savrichard.net/public;
            index index.html index.htm index.php;
            server_name sav.savrichard.dev;
            access_log /var/log/nginx/localhost.sav.savrichard.dev-access.log;
            error_log /var/log/nginx/localhost.sav.savrichard.dev-error.log error;
            charset utf-8;
            location / {
                try_files \$uri \$uri/ /index.php?\$query_string;
            }
            location = /favicon.ico { log_not_found off; access_log off; }
            location = /robots.txt { log_not_found off; access_log off; }
            error_page 404 /index.php;
            include hhvm.conf;
            # Deny .htaccess file access
            location ~ /\.ht {
                deny all;
            }
        }
    And the hhvm.conf file is:
        location ~ \.(hh|php)$ {
            fastcgi_keep_conn on;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    Read the article

  • What is a 'good number' of exceptions to implement for my library?

    - by Fuzz
    I've always wondered how many different exception classes I should implement and throw for various pieces of my software. My particular development is usually C++/C#/Java related, but I believe this is a question for all languages. I want to understand what a good number of different exceptions to throw is, and what the developer community expects of a good library. The trade-offs I see include:
    - More exception classes can allow very fine-grained error handling for API users (prone to user configuration or data errors, or files not being found)
    - More exception classes allow error-specific information to be embedded in the exception, rather than just a string message or error code
    - More exception classes can mean more code maintenance
    - More exception classes can mean the API is less approachable to users
    The scenarios I wish to understand exception usage in include:
    - During a 'configuration' stage, which might include loading files or setting parameters
    - During an 'operation' type phase where the library might be running tasks and doing some work, perhaps in another thread
    Other patterns of error reporting without using exceptions, or with fewer exceptions (as a comparison), might include:
    - Fewer exceptions, but embedding an error code that can be used as a lookup
    - Returning error codes and flags directly from functions (sometimes not possible from threads)
    - Implementing an event or callback system upon error (avoids stack unwinding)
    As developers, what do you prefer to see? If there are MANY exceptions, do you bother handling them separately anyway? Do you have a preference for error handling types depending on the stage of operation?
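
    One middle ground between the two extremes is a small hierarchy keyed to the phases described above, with an error code and structured context embedded in each instance instead of a class per failure. A hedged sketch in Python - the class names, codes, and fields are illustrative only, not a recommendation from any particular library:
        class LibraryError(Exception):
            """Base class for everything the library raises."""

            def __init__(self, message, error_code, **context):
                super(LibraryError, self).__init__(message)
                self.error_code = error_code   # stable code callers can look up or switch on
                self.context = context         # structured details (file name, parameter, ...)

        class ConfigurationError(LibraryError):
            """Raised while loading files or validating parameters."""

        class OperationError(LibraryError):
            """Raised while tasks are running, possibly on a worker thread."""

        # Callers can then handle errors by phase, by code, or not at all:
        try:
            raise ConfigurationError("settings file not found", error_code=1001,
                                     path="/etc/mylib/settings.ini")
        except ConfigurationError as exc:
            print(exc, exc.error_code, exc.context)
    This keeps the class count low enough to maintain while still letting API users branch on fine-grained information when they need it.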

    Read the article

  • eSTEP Newsletter for the technical EMEA partner community

    - by mseika
    We are pleased to present to you the first issue of the eSTEP Newsletter, which is dedicated to supporting the technical EMEA partner community in the effort to provide more information on what is going on within the corporation, the technical news regarding hardware, events, and all the important things which we think may be of interest to you.
    Invitation: STEP TechCast: Oracle Solaris 11 Express
    Get an insight into how Oracle Solaris 11 Express has raised the bar on the innovation introduced in Oracle Solaris 10. Learn about the new integrated features such as:
    - network-based package management tools
    - improvements to built-in virtualization
    - new virtualised network architecture
    - security enhancements
    - file system evolution
    Learn how Oracle Solaris 11 Express provides greatly decreased planned system downtime, performs a completely safe system upgrade, achieves an unprecedented level of flexibility for application consolidation, and provides the highest levels of security in your datacenter.
    Date and time: Thursday, 7 July 2011, 13:00 - 14:00 CEST
    Speaker: Joost Pronk van Hoogeveen
    Target audience: Tech Presales
    Webcast Coordinates: You will find the coordinates in the eSTEP portal under the Events tab. Use your email address and PIN: eSTEP_2011 to get access.
    We are happy to get your comments and feedback.

    Read the article

  • Accessing second hard drive

    - by Jonathan
    So I recently installed Ubuntu 10.10 64-bit on my computer. I installed it on my 60GB SSD, and during the installation it never acknowledged the existence of my second hard drive. The hard drive that I keep all my files on, and which I want to make my home folder if I can, is a Western Digital Caviar Black 1TB SATA 6Gb/s 64MB cache (WD1002FAEX). I've read the following: https://help.ubuntu.com/community/Mount but honestly cannot work out how to access the hard drive from my Ubuntu installation. I did have Windows 7 64-bit prior to installing Ubuntu. I have backed up all the files on the hard drive, but if I could just access them straight off that would be super cool. Does anyone know how I can use the second hard drive? Thank you for your help.
    EDIT: The following directories are currently in my /dev/ folder: ati/, block/, bsg/, bus/, char/, cpu/, disk/, input/, mapper/, net/, pktcdvd/, pts/, shm/, snd/, and usb/
    EDIT: Result from sudo fdisk -l:
        Disk /dev/sda: 60.0 GB, 60022480896 bytes
        255 heads, 63 sectors/track, 7297 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000d2dfd
        Device Boot    Start      End     Blocks   Id  System
        /dev/sda1  *       1     6994   56174592   83  Linux
        /dev/sda2       6994     7298    2438145    5  Extended
        /dev/sda5       6994     7298    2438144   82  Linux swap / Solaris
    @djeykib So very close to fixing it... unfortunately on the last command you gave it says this:
        $ sudo apt-get install linux-lts-backport-natty
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package linux-lts-backport-natty
    Checking on http://www.ubuntuupdates.org/ppas reveals that it is only available for 10.04. Looks like I'll have to unplug and re-plug hardware if I want it working still :(

    Read the article

  • OpenWorld 2011 Video Index

    - by Chris Kawalek
    We did quite a few virtualization videos this year at Oracle OpenWorld 2011. You can find all these and more on our YouTube channel.
    Virtualization Wrapup - Adam Hawley discusses the Oracle virtualization presence at Oracle OpenWorld 2011. http://www.youtube.com/oraclevirtualization#p/f/2/53_SQYljqN4
    Oracle Applications on iPad - Brad Lackey shows how you can access Oracle Applications on iPad. http://www.youtube.com/oraclevirtualization#p/f/9/3Ug5km3uxEQ
    Thinkquest.org and Oracle VM - Dan Herrup describes how Thinkquest.org is using Oracle VM to help kids learn how to solve real world problems with computer technology. http://www.youtube.com/oraclevirtualization#p/f/6/Bw-km5kqzEo
    Avaya and Oracle Virtualization - See Oracle desktop virtualization in action at Avaya's booth. http://www.youtube.com/oraclevirtualization#p/f/4/xIHRIijEPkM
    Eco-Features of Sun Ray Clients - Michael Dann shows off the Sun Ray 3 Plus and talks about the eco benefits of Oracle's extremely low power consumption client device for desktop virtualization. http://www.youtube.com/oraclevirtualization#p/f/3/ulArHGe1OmM
    Application and Desktop Access with Oracle Secure Global Desktop - Watch Jeff Harvey do a quick demo of Oracle Secure Global Desktop accessing Oracle Applications. http://www.youtube.com/oraclevirtualization#p/f/5/g_ikA7dwh0g
    Oracle VM VirtualBox for VDI - Andy Hall describes how enterprises leverage Oracle VM VirtualBox as part of their VDI deployments. http://www.youtube.com/oraclevirtualization#p/f/8/WmkeYlzgnZ8
    TechCast Live: The Coolest Virtualization Products - Interview with Andy Hall about the desktop virtualization portfolio. http://www.youtube.com/oraclevirtualization#p/f/7/VMkrAhZ83AA

    Read the article

  • Need help with ColdFusion and ASP.NET site [closed]

    - by Michael Stone
    To begin, I wasn't too sure how to title this... I've got a few questions. First off, I've got a very big site that's in ColdFusion, and we've been migrating to ASP.NET C# 4.0 for the last 8 months. I've got a team of 7 programmers and no one can seem to figure out these answers, not even our senior C# programmer. We're using Team Foundation Server and we can't figure out how to push up only one small change at a time. Right now we're stuck with publishing the entire site, and it's causing serious issues. We've currently got the site as a Project and not a Website. We're wondering if that's one issue. I actually think it might be a problem. We're also dealing with an issue where we can't access our regular folders with relative paths. So we're first developing our admin side in .NET, and we've got our regular site and then we've got another site within that for our .NET admin tools. By site, I'm referring to them actually being Sites in IIS. This also creates a problem for us when we're creating tools that upload images and want to store them and access them from our parent Site. I'd very much appreciate any advice on how to go about this in the most standardized way. So what I'm hoping for is advice on:
    - Publishing and managing a site/project in Team Foundation Server. Being able to push up one fix at a time if needed would be GREAT!
    - Any help figuring out the issue of referencing folders from my .NET child site to my parent ColdFusion site using regular relative paths. "/a/images/b/" would be nice instead of only being able to do "/b/images/".
    We're using ColdFusion 8, C# ASP.NET 4.0/Entity Framework/POCO Templates, and a Windows 2008 R2 server. Thank you in advance for any help.

    Read the article

  • Player & Level class structure in 2D python console game?

    - by Markus Meskanen
    I'm trying to create a 2D console game, where I have a player who can freely move around in a level (~map, but map is a reserved keyword) and interact with other objects. Levels are constructed out of multiple Blocks, such as player(s), rocks, etc. Here's the Block class:
        class Block(object):
            def __init__(self, x=0, y=0, char=' ', solid=False):
                self.x = x
                self.y = y
                self.char = char
                self.solid = solid
    As you see, each block has a position (x, y) and a character to represent the block when it's printed. Each block also has a solid attribute, defining whether it can overlap with other solids or not. (Two solid blocks cannot overlap.) I've now created a few subclasses from Block (Rock might be useless for now):
        class Rock(Block):
            def __init__(self, x=0, y=0):
                super(Rock, self).__init__(x, y, 'x', True)

        class Player(Block):
            def __init__(self, x=0, y=0):
                super(Player, self).__init__(x, y, 'i', True)

            def move_left(self, x=1):
                ...  # How do I make sure Player won't overlap with rocks?
                self.x -= x
    And here's the Level class:
        class Level(object):
            def __init__(self, name='', blocks=None):
                self.name = name
                self.blocks = blocks or []
    The only way I can think of is to store a Player instance in Level's attributes (self.player = Player(), or so) and then give Level a method:
        def player_move_left(self):
            for block in self.blocks:
                if block.x == self.player.x - 1 and block.solid:
                    return False
    But this doesn't really make any sense; why have a Player class if it can't even be moved without Level? IMO, the player should be moved by a method inside Player. Am I wrong about something here, and if not, how could I implement such behavior?
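
    You're not wrong that movement belongs on Player. One common compromise is to let Level own the collision query (it owns the blocks) and pass the level into the movement method, so Player still decides how it moves. A rough sketch building on the Block class above - the method name is_solid_at is my own invention, not from the original code:
        class Level(object):
            def __init__(self, name='', blocks=None):
                self.name = name
                self.blocks = blocks or []

            def is_solid_at(self, x, y):
                # True if a solid block already occupies (x, y).
                return any(b.solid and b.x == x and b.y == y for b in self.blocks)

        class Player(Block):
            def __init__(self, x=0, y=0):
                super(Player, self).__init__(x, y, 'i', True)

            def move_left(self, level, x=1):
                # Player owns the movement rule; Level only answers "is that cell free?"
                if not level.is_solid_at(self.x - x, self.y):
                    self.x -= x
                    return True
                return False
    The returned boolean makes it easy for the game loop to play a "bump" sound or skip the redraw when a move is blocked.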

    Read the article

  • Gmail Now Supports Google Drive Integration; Share Files Up to 10GB

    - by Jason Fitzpatrick
    Gmail users can now easily send large files thanks to Google Drive’s increased integration with Gmail – blow through the 25MB in-email attachment limit and share files up to 10GB. From the official Gmail announcement: Have you ever tried to attach a file to an email only to find out it’s too large to send? Now with Drive, you can insert files up to 10GB – 400 times larger than what you can send as a traditional attachment. Also, because you’re sending a file stored in the cloud, all your recipients will have access to the same, most-up-to-date version.  Like a smart assistant, Gmail will also double-check that your recipients all have access to any files you’re sending. This works like Gmail’s forgotten attachment detector: whenever you send a file from Drive that isn’t shared with everyone, you’ll be prompted with the option to change the file’s sharing settings without leaving your email. It’ll even work with Drive links pasted directly into emails.  The new Gmail/Drive integration is rolling out in waves to users over the next few days and is accessible via the new Gmail compose window.

    Read the article

  • How to host an AP or a hotspot?

    - by user1048138
    I'm running Ubuntu 12.04 as a virtual machine on my Mac. Since I am unable to get the virtual machine to have full access to my WiFi card, I bought another USB WiFi card to use. This is my WiFi card. If you are unfamiliar with virtual machines, it shouldn't matter as far as I know, since the Ubuntu VM now has its own card. I have followed these guides with no luck: https://help.ubuntu.com/community/WifiDocs/WirelessAccessPoint http://www.danbishop.org/2011/12/11/using-hostapd-to-add-wireless-access-point-capabilities-to-an-ubuntu-server/ The problem is that the WiFi connection appears on all of the machines that I have in my house: 2 iPhones, a Dell machine running Ubuntu and two MacBooks. However, the connection times out on all of these machines. Questions:
    - Could this be a driver issue, given that the same WiFi card can connect to other WiFi access points and use their internet?
    - Could this be DHCP related? I would think not - it should at least get a 169.X.X.X address, no?
    Any solutions for me?

    Read the article

  • Kickstarter and 2D smartphone games

    - by mm24
    I am about to launch a Kickstarter project as, after 14 months of full-time development on my first iOS game, I have run out of money. I developed an iOS game that needs a few more months to be ready (the game structure is there but I haven't yet worked on balancing the difficulty of the various levels). I have a feeling that most of the computer games funded on Kickstarter are for console, PC or Mac and not for smartphones. The category that many people seem to like is RPG-style games. I have done tons of work over a year and collaborated with musicians and illustrators to get top quality graphics and music. The game looks cool for an iOS 2D game but, compared to what I've seen on Kickstarter, I feel so little and humbled. I have searched for smartphone game projects on Kickstarter but haven't found many. I believe that the reason is that people are not keen on backing an app that is normally sold for $0.99, as they perceive it is not something big. Am I the only one having this feeling? Could anyone please share a list of references to some successfully backed Kickstarter smartphone game projects? (In this way the question will not become a "chat" and will fulfill the requirements to be a gamedev question.) Any other article or authoritative answer will be welcome.

    Read the article

  • Oracle Endeca "Getting Started" Partner Guide

    - by Grant Schofield
    For partners looking for a concise step-by-step guide to getting started with Oracle Endeca Information Discovery, here it is to help you get started as quickly as possible.
    Step 1: Join the Knowledge Zone as a company and as an individual - this will give you a) the right to resell Oracle Endeca ID, and b) notice of any free / subsidised training events in your region.
    Step 2: For a quick general overview & positioning see the following article, in particular the Agile BI Video series, which is useful to share with prospective clients. Also find a link to the official OEID Data Sheet.
    Step 3: For a more detailed overview there is a live recorded OEID partner webcast with downloadable slides. In conjunction with this, your sales / presales team have free access to the official OEID Partner Playbook as well as the full Oracle price book.
    Step 4: Download the OEID software and install. Please be aware you will need a 64-bit machine & a 64-bit Operating System. A useful solution for partners that have a 32-bit Operating System is to use Oracle's free VirtualBox software to quickly and easily create a Linux image and install on that.
    Step 5: Attend a free / subsidised training event in your region. Please join the Knowledge Zone as an individual (opt in) to be informed of these. We will also publish these via the blog.
    Things are moving fast, so please be aware that the team are working hard to produce more and more material such as downloadable data sets (structured / unstructured), a downloadable image, and access to demos, and over the next few weeks we will update this article as soon as new material becomes available!

    Read the article

  • Password protect an alias virtual directory

    - by Jason
    I have a main domain being hosted through CPanel. I also have a sub-domain that I would like to appear as a path under the main domain instead of as a sub-domain. So I have: http://example.com/ pointing to the main hosted files, and http://example.com/mydir pointing to the subdomain files. This is achieved by a httpd.conf include from the main domain section to set an alias:
        alias /mydir /path/to/subdomain/files/
    Now, that works fine so far. The problem is that if a .htaccess file under /path/to/the/subdomain/files/ contains an error, the alias is completely skipped, and /mydir goes instead to the main host files. That is kind of surprising to me - I would expect an error to return an error instead. Now the killer: if I try to password protect /path/to/subdomain/files/, then trying to access http://example.com/mydir will again attempt to deliver from under the main hosted files and not from /path/to/subdomain/files/. I am not seeing any errors reported on the .htaccess file in the apache error log, so I am assuming the .htaccess is valid:
        AuthUserFile /path/to/valid/readable/.htpasswd
        AuthName "Secure Access"
        AuthType Basic
        Require valid-user
    This kind of behaviour does not seem right to me. Is there something obvious that could be causing it? Or is this just the way it works? Perhaps using an alias is the wrong way to go?

    Read the article

  • Architecture: am I doing things right?

    - by Jeremy D
    I'm trying to use a '~classic' layered architecture using .NET and Entity Framework. We are starting from a legacy database which is a little bit crappy:
    - Inconsistent naming
    - Unneeded views (views referencing other views, select * views, etc.)
    - Aggregated columns
    - Potatoes and carrots in the same table
    - etc.
    So I ended up fully isolating my database structure from my domain model. To do so, EF entities are hidden from the presentation layer. The goal is to permit easier database refactoring while lowering its impact on applications. I'm now facing a lot of challenges and I'm starting to ask myself if I'm doing things right. My domain model is highly volatile; it keeps evolving with the apps as new field needs arise. Its complexity keeps rising, and the classes it contains are starting to get a lot of properties. Creating an include strategy and reprojecting to EF is very tricky (my domain objects don't have any kind of lazy/eager loading relationship properties):
        DomainInclude<Domain.Model.Bar>.Include("Customers").Include("Customers.Friends")
        // To...
        IFooContext.Bars.Include(...).Include(...).Where(...)
    Some frameworks break the isolation levels (DevExpress grids need either XPO or IQueryable for filtering and paging large data sets). I'm starting to ask myself whether:
    - the isolation of EF auto-generated entities is an unneeded cost. Should I allow frameworks to hit IQueryable? Slow slope to hell? (It's really hard to isolate the DevExpress framework - any successful experience?)
    - the high volatility of my domain model is normal?
    Did you have similar difficulties? Any advice based on experience?

    Read the article

  • PPT Leveraging Azure for Performance Testing

    - by Tarun Arora
    I have recently presented a session on “How you can leverage Azure for Performance Testing” your application.  It goes without saying that performance testing your application not only gives you the confidence that the application will work under heavy levels of stress but also gives you the ability to test how scalable the architecture of your application is. It is important to know how much is too much for your application! Working with various clients in the industry I have realized that the biggest barrier in Load Testing & Performance Testing adoption is the high infrastructure and administration cost that comes with this phase of testing. In the session I tried to demonstrate how you can use the power of Windows Azure to effectively abstract the administration cost of infrastructure management and lower the total cost of Load & Performance Testing. You can view the session presentation here, http://www.slideshare.net/aroratarun/leveraging-azure-for-performance-testing  I’ll be adding a video on this subject shortly… If you have any feedback or further suggestions to add to the goodness of this solution please get in touch.

    Read the article

  • Running a Screen instance of a program as non-root

    - by user288467
    I've got a dedicated server (Ubuntu 12.04, no GUI) set up to launch an instance of McMyAdmin and attach it to a screen instance every time I reboot the hardware. I have the command saved to root's crontab as:
        @reboot cd /var/MC_SVR && screen -dmS McMyAdmin ./MCMA2_Linux_x86_64
    Problem being, though, I have a user set up specifically for FTP access to the server files so I don't always have to SSH into the machine. Since the server is being started as a root process, all the files it makes are, obviously, set with root as the owner. So I chown'd all the files and set them to ftpuser. Now I'm stuck with trying to get the process to start as ftpuser. I've tried doing the following but to no avail:
        cd /var/MC_SVR && su ftpuser - -c 'screen -dmS McMyAdmin ./MCMA2_Linux_x86_64'
    I try this in terminal and I get no errors or anything (in fact I never get anything unless it's a syntax error from su), but there's no screen instance to access and so I can assume the server never starts. So, what am I doing wrong? Or am I just not accessing the screen instance correctly since it's (supposed) to be launched by another user?

    Read the article

  • USB flash module giving errors

    - by vshenoy
    Hi, I have a SATA USB flash module which was earlier running a 2.4 Linux kernel (2.4.36.6) and on which now I am trying to install Ubuntu Server 10.04.1 LTS. I have two such USB flash modules, and on one of them the installation process itself gives these errors:
        sd 4:0:0:0 [sda] Device not ready
        sd 4:0:0:0 [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        sd 4:0:0:0 [sda] Sense Key : Not Ready [current]
        sd 4:0:0:0 [sda] Add. Sense: Medium not present
        sd 4:0:0:0 [sda] CDB: Write(10): 2a 00 00 05 48 02 00 00 04 00
        end_request: I/O error, dev sda, sector 46114
        usb 1-1: reset high speed USB device using ehci_hcd and address 2
        Buffer I/O error on device sda1, logical block 172033
        lost page write due to I/O error on sda1
        Buffer I/O error on device sda1, logical block 172034
        lost page write due to I/O error on sda1
    On the other the installation is successful, but after a day or two of running, the machine hangs because of the kernel spewing these messages:
        Remounting filesystem read-only
        EXT2-fs error (device sda1): read_block_bitmap: Cannot read block bitmap - block_group = 105, block_bitmap = 860161
        EXT2-fs error (device sda1): ext2_get_inode: unable to read inode block - inode=13083, block=24683
        ext2_free_inode: bit already cleared for inode 83966
    and the machine needs to be hard rebooted. On both systems SCSI emulation with the usb_storage driver is being used to detect the module. Here is the output of /proc/scsi/scsi on 2.4:
        # cat /proc/scsi/scsi
        Attached devices:
        Host: scsi0 Channel: 00 Id: 00 Lun: 00
          Vendor: TS Model: UFM Rev: 1100
          Type: Direct-Access ANSI SCSI revision: 02
    and on 2.6:
        # cat /proc/scsi/scsi
        Attached devices:
        Host: scsi6 Channel: 00 Id: 00 Lun: 00
          Vendor: TS Model: UFM Rev: 1100
          Type: Direct-Access ANSI SCSI revision: 00
    i.e. only 'ANSI SCSI revision:' is shown as different, although I am not sure if this can cause any problem. I'd really appreciate it if someone could point out how to debug this issue, or any mailing list where I can further ask questions about this.

    Read the article
