Search Results

Search found 28603 results on 1145 pages for 'active users'.


  • Warning that users must handle

    - by hagaik
    I am looking for a way to raise a warning that the user will have to respond to. In a sense, I would like a late exception mechanism that fires after the function has already finished executing and returned the wanted value.
        SomeObject Foo(int input) {
            SomeObject result;
            // do something. oh, we need to warn the user.
            return result;
        }

        void Main() {
            SomeObject object;
            object = Foo(1);
            // after the copy constructor is done I would like an exception to be thrown
        }


  • How do I track users (clients) in REST GET calls?

    - by PythonKing
    We have a public REST application that receives a lot of GETs from clients. We have a way to track the POST calls, but we do not have a way to track where the user has come from for the GET calls. Our intention is to apply some client-specific business rules if we can determine where each call has come from.
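
    One common pattern (an assumption here, not something stated in the question) is to require callers to identify themselves on GET requests as well, for example with an API key or client ID sent in a request header, and to record it in middleware where client-specific business rules can also be applied. A minimal sketch using Express; the header name X-Client-Id and the framework are illustrative choices only:

        var express = require('express');
        var app = express();

        app.use(function (req, res, next) {
            if (req.method === 'GET') {
                // Callers are asked to send an identifying header; absent means "unknown".
                var clientId = req.get('X-Client-Id') || 'unknown';
                console.log(new Date().toISOString(), 'GET', req.originalUrl, 'from', clientId);
                // Client-specific business rules could branch on clientId here.
            }
            next();
        });

        app.get('/items', function (req, res) {
            res.json([]); // placeholder resource
        });

        app.listen(3000);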


  • jquery selection with .not()

    - by Yako
    Hello, I am having some trouble with jQuery. I have a set of divs with the .square class. Only one of them is supposed to have the .active class at a time. This .active class may be activated/de-activated on click. Here is my code:
        jQuery().ready(function() {
            $(".square").not(".active").click(function() {
                //initialize
                $('.square').removeClass('active');
                //activation
                $(this).addClass('active');
                // some action here...
            });
            $('.square.active').click(function() {
                $(this).removeClass('active');
            });
        });
    My problem is that the first function is called even if I click on an active .square, as if the selector were not working. In fact, this seems to be due to the addClass('active') line... Would you have an idea how to fix this? Thanks
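
    A plausible cause (an assumption, since only the binding code is shown) is that .not(".active") is evaluated once, when the handlers are bound, so each square keeps the handler it was given even after its classes change later. A minimal sketch of one way around this, deciding inside a single handler at click time and reusing the selectors and class names from the question:

        jQuery(document).ready(function() {
            // One handler for every square; the "is it active?" check happens
            // when the click fires, not when the handler is bound.
            $(".square").click(function() {
                if ($(this).hasClass("active")) {
                    $(this).removeClass("active");      // de-activate the clicked square
                } else {
                    $(".square").removeClass("active"); // reset all squares
                    $(this).addClass("active");         // activate the clicked one
                    // some action here...
                }
            });
        });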


  • How to set up a centralized backup server with lots of offsite workstations, intermittent internet connectivity, and stubborn users?

    - by Zac B
    This might be an impossible question. Context: We have a bunch of computers spread across around 1,000 users. We have a centralized office where 900 of the users work most of the time. Most of the computers are laptops, and they are very frequently coming on and off the network for hours at a time. Users often take their computers home and do lots of work from home. In addition, there are a handful of users who work elsewhere in the country and are offline (no internet connection whatsoever) for more than half of the time they use their machines. All of the machines run Windows 7/XP.
    Problem: People are always losing data. One day someone accidentally deletes a bunch of files. The next day someone else installs a bad driver or tries to mess with something in system32 and needs a personal data backup and a reinstall of Windows. Because so many of our business operations are done without an internet connection, and because computers come on- and offline so frequently, it's not feasible to make users use network storage for all of their data. We tried giving them Dropboxes, and they stored their files elsewhere. We bought and deployed Altiris, and they uninstalled it and blamed us when they couldn't get back files they had accidentally deleted while offline, not having taken a backup in months. We tried teaching them backup best practices and using scheduled sync tools to upload things to the network drives, and they turned those off because they "looked like viruses". It doesn't help that many of these users are pretty high up in the business and are not amenable to any sort of "you need to do something regularly because we say so" solution.
    Question: Other than finding another job where IT is treated differently and users are willing to follow best practices, how would people recommend I implement a file backup solution that supports the following:
    - Backs up to a centralized server over LAN or WAN whenever a network link becomes available, or on a schedule.
    - Supports interrupted/resumed backups (and hopefully file-delta-only backups), since connections to the network (WAN or LAN) are often slow and only open for half an hour or so.
    - Supports relatively rapid, "I accidentally deleted the TPS reports! Oh no!" single-file recovery, ideally administered from the central backup server rather than the client PC.
    - Supports local-to-local file-delta backup on a schedule, so that users without a network connection for a few days can still retrieve accidental deletions or whatnot. Ideally, the locally stored backups would be pushed up to the server whenever a network link is available.
    - Isn't configurable on the clients without certain credentials, because the CFOs (who won't give up their admin rights on the domain) will disable it if they can.
    - Backs up the entire hard drive; there are people who are self-righteous about storing things in C:\, or in the recycle bin, or in the C:\Windows dir (yes, I know).
    I'm fine integrating multiple products/solutions, or scripting different programs together myself (I'm a somewhat competent programmer), but I've been drawing a blank on where to start. Dropbox is folder-specific, Altiris doesn't cope with LAN outages or interrupted/resumed backups, and Volume Shadow Copy is great for a local-to-local solution, but I don't know how to push days of stored shadow copies up to a server in a two-hour window of network access. The company is fine with spending decent money on this: thousands (USD) on a server, and hundreds on clients, if necessary.
    I want to emphasize that this isn't a shopping-list request. While I wish there were a program out there that did what I want, I've looked pretty hard and not found anything that fits the bill. Instead, I'm hoping for ideas on where to start hacking things together from scratch/from different technologies to make something stable that works. Cheers!


  • Moving the users folder on Windows 7 to another partition - bad idea?

    - by Donat
    Hi, I'd like to re-submit here a question posted by Benjol on Aug 17 at 5:57, "Moving users folder on Windows Vista to another partition - bad idea?" (I can't post more than one link until I earn "10 reputation", and I removed my "answer" there to post my follow-up questions here). I am anxiously getting ready, at long last, to carry out a clean install (using the custom install option) from Vista to Windows 7 Home Premium 64-bit with the free upgrade I received late October. For my Vista system I successfully set up last summer a multi-partition scheme with Users and Program Data on a different partition than the operating system (see the link below, and its subsequent links in my comment, for details). http://tuts4tech.net/2009/08/05/windows-7-move-the-users-and-program-files-directories-to-a-different-partition/comment-page-1/#comment-562 I was planning a similar set-up for Windows 7, a little more streamlined, with the OS and Program Files on C:, Users and Program Data on D:, and TV media recording on a separate partition. Reading the question submitted by Benjol, I am second-guessing too. Is moving Users and Program Data to a different partition than the default primary partition holding the OS and Program Files such a good idea? The couple of people I talked to at the official Microsoft Windows 7 booth at CES 2010 gave the same answer to the idea of moving the Users profile folder to another partition. In a nutshell, they all told me that they used to do this in XP, less so in Vista, but not anymore with Windows 7... "It is stable, after two months still no problem." I had the feeling it was a scripted answer to emphasize how stable and efficient Windows 7 is... (Will a Windows 7 system not become bogged down over the course of several months to a year or two? Only time will tell.)
    Long story short, I share the same view that Benjol expressed with respect to being "able to backup and restore system and user data independently." I just received a 2TB USB 2.0/eSATA external hard drive as a backup drive, which includes NTI Shadow 4 (4.1.0.150) as a backup solution. I took note of the issue with NTUSER.DAT and I will read more about the Volume Shadow Copy Service (VSS) in Windows 7. I am willing to put in the effort if placing Users and Program Data on a different partition would allow me to restore a fresher OS + Program Files image when the system gets bogged down.
    Questions: Is it such a bad idea? What is the "easy route" referred to by Benjol in his post? Is it to just relocate folders to another partition using the folder Properties tool? (That is not practical for several users and might not provide a straightforward restore process of just the OS and Program Files when needed.) I am starting to learn about Windows 7 libraries. Would Windows 7 libraries be another alternative to achieve this?
    All this reading to decide how to organize the partition scheme for my custom install is starting to be confusing. I apologize for this lengthy question. It is my first day here on SuperUser and I am just learning how different it is from a discussion thread. Thank you in advance for all your suggestions and comments. Donat


  • Name of the concept of designing an interface to allow expert users to become more efficient?

    - by Grundlefleck
    I'm searching for sources and further information on a particular concept in user experience design. It's not a particularly complicated concept: when designing user interfaces, you should make them intuitive and simple for new users, but also provide a way for users to become more efficient as they grow familiar with the application. An example would be including a prominent button for a common action for new users, while also providing a keyboard shortcut/mnemonic for expert users. Another example would be providing full functionality through a GUI but allowing expert users to script the same actions. The point is that the expert path is more difficult to learn, but it makes users more efficient. I'm pretty sure there's a name for this that I can't recall, and I'm having trouble searching for sources and references on it. What is the name of the concept of designing an interface to allow expert users to become more efficient?


  • How to deploy and secure an ASP.NET web app to be available to internal and outside users?

    - by Swoop
    My company has several web applications written in ASP.NET. We need to make these applications available to intranet users as well as authenticated external users. Most of the features are the same for the two groups, though there are some extra features available to the internal users. The two different sets of users would use a slightly different security setup: our internal people will be authenticated using LDAP against Exchange, whereas the external users will have accounts in SQL Server. What is the best approach for deploying our web apps? Should we deploy two copies to different servers, one configured for the intranet and one for outside users? Or is there a better way to share the code between the two servers, yet have the flexibility to use different web.config settings for security?


  • Google activates real-time search in France, including social network feeds

    Google activates real-time search in France, including social network feeds. A little more than three months after it went live in the American version of the search engine, real-time search arrives on Google.fr today. This feature, now available to French speakers, returns search results that blend newspaper headlines with blogs, tweets, and other status updates from community sites. The results given by the search engine can be sorted in chronological order, and now take into account content from Facebook, Twitter, MySpace and FriendFeed. The user can...


  • How can I format my active hard drive to NTFS?

    - by Ghost
    Believe it or not, I'm not too happy with Ubuntu. Well, let me rephrase that: I like it, but the only thing I don't like about it is that it's too much of a hassle to get a game to work. I'm trying to install Windows 7 from a 4GB flash drive, but the error that comes up says the hard drive I'm trying to install on is formatted as ext4. I need to reformat it as NTFS. I can't seem to find any topics on how to format an active hard drive. I found a topic that explains how to move Ubuntu to a new drive, but it's a bit confusing to me. Please help! (Please don't disregard this topic just because I want to go back to Windows.)


  • Can I set up a hostname alias that is only active in a specific network?

    - by denisw
    I have a small ARM box running ownCloud in my house, and a domain bound to my DSL router with Dynamic DNS. This works well as I can configure the ownCloud client to use the domain name, which works both in the house and when I'm on the road. However, at home I would like to let my laptop and my server talk to each other directly on the local network, rather than as if they were talking over the Internet (including DNS resolution for my domain name, routing all traffic through the DSL machinery etc.). I know that I can set a hostname alias for my domain in /etc/hosts - the problem is that I want this alias to only be active when I'm in the home network, while in all other cases DNS resolution should be used as normal. Is this possible?


  • Do Not Track must not be enabled by default, says the W3C; Microsoft stands by its position and keeps it enabled in IE 10

    Do Not Track must not be enabled by default, according to the W3C; Microsoft stands by its position and keeps the feature enabled in IE 10. Update of 20/06/2012, by Hinault Romaric. Microsoft has opted, for Internet Explorer 10, to enable the Do Not Track (DNT) feature by default. With the goal of protecting users' privacy, the Do Not Track feature adds an HTTP header intended to disable the tracking of users by web applications, tracking that is useful for targeted advertising. Microsoft's choice has triggered a wave of criticism from advertis...


  • Google enables partial response and partial update in its Data Protocol, making updates easier

    Google enables partial response and partial update in its Data Protocol, to make updating its APIs easier. With the goal of boosting the speed of its APIs, the Google Data Protocol (which lets developers write applications tied to the data held in Google products) received two new features this week, at the experimental stage: partial response and partial update. Together, these two features "can significantly reduce the network, memory and CPU resources" needed to work with Google's APIs. To explain the role of partial response, the Google Data Protocol team gives ...


  • Connecting Windows applications (C#, Visual Basic, Active Server Pages, Visual C++) to Oracle Database

    - by Yusuke.Yamamoto
    This entry covers the data access components available to Windows developers (C#, Visual Basic, Active Server Pages, Visual C++) who need to connect applications to Oracle Database. For .NET: Oracle Data Provider for .NET (ODP.NET), Oracle Provider for OLE DB (with OLE DB .NET), and the Oracle ODBC Driver (with ODBC .NET). For Visual Basic/ASP/Access: Oracle Objects for OLE (OO4O), the Oracle ODBC Driver, and Oracle Provider for OLE DB. OO4O, OLE DB, ODBC, and ODP.NET are bundled as the Oracle Data Access Components (ODAC), which can be downloaded from OTN (Oracle Data Access Components). See also the Oracle on Windows / .NET resources.


  • Deep in the Heart of Texas

    - by Applications User Experience
    Author: Erika Webb, Manager, Fusion Applications UX User Assistance
    When I was first working in the usability field, the only way I could consider conducting a usability study was to bring a potential user to a lab environment where I could show them whatever I was interested in learning more about and ask them questions. While I hate to reveal just how long I have been working in this field, let's just say that pads of paper and a stopwatch were key tools for any test I conducted. Over the years, I have worked in simple labs with basic videotaping equipment and not much else, and I have worked in corporate environments with sophisticated usability labs and state-of-the-art equipment. Years ago, we conducted all usability studies at the location of the user. If we wanted to see if there were any differences between users in New York, Chicago, and Los Angeles, we went to those places to run the test. A lab environment is very useful for many test situations. However, there has always been a debate in the usability field about whether bringing someone into a lab environment, however friendly we make it, somehow intrinsically changes the behavior of the user as compared to having them work in their own environment, at their own desk, and on their own computer. We developed systems to create a portable usability lab, so that we could go to the users we needed to test. Do lab environments change user behavior patterns?
    Then 9/11 hit. You may not remember, but no planes flew for weeks afterwards. Companies all over the world couldn't fly in employees for meetings. Suddenly, traveling to the location of the users had an additional difficulty. The company I was working for at the time had usability specialists stuck in New York for days before they could finally rent a car and drive home to Colorado. This changed the world pretty suddenly, and technology jumped on the change. Companies offering Internet meeting tools had been struggling, until no one could travel. The Internet boomed with collaboration tools that enabled people to work together wherever they happened to be.
    This change in technology has made a huge difference in my world. We use collaborative tools to bring our product concepts and ideas to the user across the Internet. As a global company, we benefit from having users from all over the world inform our designs. We now run usability studies with users all over the world in a single day, a feat we couldn't have accomplished 10 years ago by plane! Other technology companies have started to do more of this type of usability testing, since the tools have improved so dramatically. Plus, in our busy world, it's not always easy to find users who can take the time away from their jobs to come to our labs. Reaching users where it is convenient for them greatly improves the odds that people do participate. I manage a team of usability specialists who live in India and California, while I live in Colorado. We have wonderful labs that we bring users into to show them our products. But very often, we run our studies remotely. We used to take the lab to the users; now we use the labs, but we let the users stay where they are. We gain users who might not have been able to leave work to come to our labs, and they get to use the system they are familiar with. And we gain users nearly anywhere that we can set up an Internet connection, as long as the users have a phone, a broadband connection, and a compatible Web browser (with no pop-up blockers).
    After we recruit participants in a traditional manner, we send them an invitation to participate through a telephone conference call and a Web conferencing tool. At Oracle, we use Oracle Web Conference, part of Oracle Collaboration Suite, which enables us to give the user control of the mouse while we present a prototype or wireframe pictures. We can record the sessions over the Web and phone conference. We send the users instructions, plus tips to ensure that we won't have problems sharing screens. In some cases, when time is tight, we even run a five-minute "test session" with users a day in advance to be sure that we can connect. Prior to the test, we send users a participant script that contains information about the study, including any questionnaires. This is exactly the same script we give to participants who come to the labs. We ask users to print this before the beginning of the session. We generally run these studies by having a usability engineer in our usability labs, so that we can record the session as though the user were in the lab with us. Roughly 80% of our application software usability testing at Oracle is performed using remote methods. The probability of getting a remote test participant decreases the higher up the person is in the target organization. We have a methodology checklist available to help our usability engineers work through the remote processes.


  • What kind of server hardware is roughly necessary to serve a website to 10k users?

    - by jcmoney
    I've been looking at VPSes, and the specs they offer for entry-level setups seem somewhat surprising to me. I'm new to this topic, but many VPSes offer less than 512MB of memory while my laptop has 4GB, so I am curious: what does it actually take in terms of hardware to serve, say, 10k users (say 5k daily active users)? I figure a large number of factors can sway this a lot, but just for benchmarking, say the site is a social networking site written in PHP using MySQL + Apache that's not doing anything unusual like serving lots of media. So essentially a very basic Facebook, minus the absurd number of photos and videos. What about 100k users (50k daily active)? 1 million (500k daily active)? Thanks in advance.
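
    As a rough, back-of-the-envelope illustration (the request rate and peak factor below are assumptions, not figures from the question): 5k daily active users each making about 50 dynamic requests per day works out to 250,000 requests/day, or roughly 3 requests per second averaged over 24 hours; applying a common peak-to-average factor of 5-10x suggests a peak of perhaps 15-30 requests per second, which a single modest PHP + MySQL + Apache server can often sustain for simple pages if the working set fits in RAM. Scaling the same arithmetic by 10x and 100x is roughly where separating the database from the web tier, adding a caching layer, and load-balancing multiple web servers typically enter the picture.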


  • Jquery Automatic Image Slider w/ CSS & jQuery

    - by Jacinto
    This is the Automatic Image Slider w/ CSS & jQuery by Soh Tanaka. I am trying to customize it to show .desc when the mouse hovers over the slider, but it does not seem to work. Any help?
        //Set Default State of each portfolio piece
        $(".paging").show();
        $(".paging a:first").addClass("active");

        //Get the size of the images and how many there are, then determine the size of the image reel.
        var imageWidth = $(".window").width();
        var imageSum = $(".image_reel ul.examples").size();
        var imageReelWidth = imageWidth * imageSum;

        //Adjust the image reel to its new size
        $(".image_reel").css({'width' : imageReelWidth});

        //Paging + Slider Function
        rotate = function(){
            var triggerID = $active.attr("rel") - 1; //Get number of times to slide
            var image_reelPosition = triggerID * imageWidth; //Determines the distance the image reel needs to slide

            $(".paging a").removeClass('active'); //Remove all active class
            $active.addClass('active'); //Add active class (the $active is declared in the rotateSwitch function)

            //Slider Animation
            $(".image_reel").animate({
                left: -image_reelPosition
            }, 500 );
        };

        //Rotation + Timing Event
        rotateSwitch = function(){
            play = setInterval(function(){ //Set timer - this will repeat itself every 7 seconds
                $active = $('.paging a.active').next();
                if ( $active.length === 0) { //If paging reaches the end...
                    $active = $('.paging a:first'); //go back to first
                }
                rotate(); //Trigger the paging and slider function
            }, 7000); //Timer speed in milliseconds
        };
        rotateSwitch(); //Run function on launch

        //On Hover
        $(".image_reel").hover(function() {
            clearInterval(play); //Stop the rotation
        }, function() {
            rotateSwitch(); //Resume rotation
        });

        //Hide the togglebox when the page loads
        $(".desc").hide();

        //slide up and down when hovering over the image reel
        $(".image_reel").hover(function(){
            // slide toggle effect set to slow; you can set it to fast too.
            $(this).next(".desc").slideToggle("slow");
            return true;
        });

        //On Click
        $(".paging a").click(function() {
            $active = $(this); //Activate the clicked paging
            //Reset Timer
            clearInterval(play); //Stop the rotation
            rotate(); //Trigger rotation immediately
            rotateSwitch(); // Resume rotation
            return false; //Prevent browser jump to link anchor
        });
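
    One thing worth checking (an assumption here, since the question does not show the markup) is whether .desc is actually the immediate next sibling of .image_reel in the HTML: $(this).next(".desc") returns an empty set otherwise, and calling slideToggle on an empty set silently does nothing. A minimal sketch of a hover binding that does not depend on the exact sibling position, reusing the class names from the question:

        $(".image_reel").hover(
            function() {
                // Prefer a .desc inside the hovered reel; fall back to any .desc on the page.
                var $desc = $(this).find(".desc");
                if ($desc.length === 0) { $desc = $(".desc"); }
                $desc.stop(true, true).slideDown("slow");
            },
            function() {
                var $desc = $(this).find(".desc");
                if ($desc.length === 0) { $desc = $(".desc"); }
                $desc.stop(true, true).slideUp("slow");
            }
        );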


  • Adding a link to an image breaks jquery slider, help please!

    - by lewisqic
    Hi All, I am working on a site for a friend and trying to add a link to each image in a slide show. The slide show uses jQuery and a script called easySlider1.7.js. Everything works just fine if the images are left alone, but when I try to wrap the images with a link, the slides no longer progress as they should. Here is the live website that is displaying the problem... http://www.splits59.com/ Here is the HTML code for the slide images...
        <div class="covercontent2box">
          <div class="covercontent2">
            <div id="slideshow">
              <a href="/ashton-jacket-p-107.html"><img src="img/homepagesummer2010_2.2-cut.jpg" class="active" /></a>
              <a href="/carly-coverup-p-106.html"><img src="img/homepagesummer2010_3.2-cut.jpg" /></a>
              <a href="/shop-online-bottoms-c-15_14.html"><img src="img/homepagesummer2010_1.2-cut.jpg" /></a>
            </div>
          </div>
        </div>
    And here is the jQuery code on the page that controls the slider...
        <script type="text/javascript">
        function slideSwitch() {
            var $active = $('#slideshow IMG.active');
            if ( $active.length == 0 ) $active = $('#slideshow IMG:last');

            // use this to pull the images in the order they appear in the markup
            var $next = $active.next().length ? $active.next() : $('#slideshow IMG:first');

            $active.addClass('last-active');

            $next.css({opacity: 0.0})
                .addClass('active')
                .animate({opacity: 1.0}, 1000, function() {
                    $active.removeClass('active last-active');
                });
        }

        $(function() {
            setInterval( "slideSwitch()", 4500 );
        });
        </script>
    Does anyone have any idea why adding a link tag to the images breaks the slider? Any help or direction provided would be appreciated. Thanks, Devin
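
    A likely explanation (offered here as a guess, not something confirmed in the question) is that $active.next() only walks sibling elements: once each img is wrapped in its own <a>, the images are no longer siblings of one another, so .next() always comes back empty and the slider never advances past the fallback image. A minimal sketch of one way to adapt the rotation to the new markup, stepping through the parent anchors instead of the images themselves:

        function slideSwitch() {
            var $active = $('#slideshow img.active');
            if ($active.length === 0) { $active = $('#slideshow img:last'); }

            // The images now sit inside <a> wrappers, so the siblings to step
            // through are the anchors, not the images.
            var $nextLink = $active.parent().next();
            var $next = $nextLink.length ? $nextLink.find('img') : $('#slideshow img:first');

            $active.addClass('last-active');
            $next.css({opacity: 0.0})
                .addClass('active')
                .animate({opacity: 1.0}, 1000, function() {
                    $active.removeClass('active last-active');
                });
        }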


  • IE not showing jquery append() images

    - by Johannes Ruuska
    OK, so I took John Raasch's slideshow script and modified it to dynamically fetch images from folders on the server through AJAX. The slideshow works like a charm in FF and Chrome, but IE is not showing the images. And since IE's JavaScript debugging possibilities are close to none, I can't figure out what is crashing. JavaScript (indenting got messed up when pasting here, but you get the code anyway):
        function slideSwitch() {
            var $active = $('#box_bildspel IMG.active');
            if ( $active.length == 0 ) $active = $('#box_bildspel IMG:last');

            var $next = $active.next().length ? $active.next() : $('#box_bildspel IMG:first');

            $active.addClass('last-active');

            $next.css({opacity: 0.0})
                .addClass('active')
                .animate({opacity: 1.0}, 1000, function() {
                    $active.removeClass('active last-active');
                });
        }

        $(document).ready(function(){
            $.get('includes/bildspel.php?page='+page, function(r){
                var file = r.split('!');
                var path = 'pic/bildspel/'+page+'/';
                var data = '';
                if(file != null && file != ''){
                    $.each(file, function(key, value){
                        if(key == 0) {
                            $('#box_bildspel').append('<img src="'+path+value+'" class="active"></img>');
                            //console.log(path+value);
                        } else {
                            $('#box_bildspel').append('<img src="'+path+value+'"></img>');
                            //console.log(path+value);
                        }
                    });
                }
                if(file.length > 1){
                    $(function() {
                        setInterval( "slideSwitch()", 4000 );
                    });
                }
            });
        });
    PHP:
        <?php
        function getimgs($page) {
            $path = '../pic/bildspel/'.$page;
            $files = '';
            if ($handle = opendir($path)) {
                while (false !== ($file = readdir($handle))) {
                    if ($file !== '.' && $file !== '..') {
                        $files .= $file.'!';
                    }
                }
                closedir($handle);
                echo substr_replace($files, "", -1);
            }
        }
        getimgs($_GET['page']);
        ?>
    Tested in IE 7 & 8. Any ideas? I have a deadline on this site for tomorrow (April 23) and would appreciate it very much if someone could figure this one out for me, thanks in advance!
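
    Two things worth ruling out in IE (both are guesses, not confirmed causes): IE's aggressive caching of $.get responses, and the non-standard "</img>" closing tag in the appended markup. A small sketch that avoids both, keeping the element ID and response format from the question:

        $.ajaxSetup({ cache: false }); // make jQuery bust the cache on GET requests

        $.get('includes/bildspel.php?page=' + page, function (r) {
            var files = r.split('!');
            var path = 'pic/bildspel/' + page + '/';

            $.each(files, function (key, value) {
                if (value === '') { return; } // skip empty entries left by stray separators

                // Build the element with jQuery instead of an "<img ...></img>" string.
                $('<img>', { src: path + value })
                    .toggleClass('active', key === 0) // first image starts as the active slide
                    .appendTo('#box_bildspel');
            });
        });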

  • Upon reboot, Linux software raid fails to include one device of a RAID1 array

    - by user1389890
    One of my four Linux software raid arrays drops one of its two devices when I reboot my system. The other three arrays work fine. I am running RAID1 on kernel version 2.6.32-5-amd64 (Debian Squeeze). Every time I reboot, /dev/md2 comes up with only one device. I can manually add the device by saying $ sudo mdadm /dev/md2 --add /dev/sdc1. This works fine, and mdadm confirms that the device has been re-added as follows: mdadm: re-added /dev/sdc1 After adding the device and allowing the array time to resynch, this is what the output of $ cat /proc/mdstat looks like: Personalities : [raid1] md3 : active raid1 sda4[0] sdb4[1] 244186840 blocks super 1.2 [2/2] [UU] md2 : active raid1 sdc1[0] sdd1[1] 732574464 blocks [2/2] [UU] md1 : active raid1 sda3[0] sdb3[1] 722804416 blocks [2/2] [UU] md0 : active raid1 sda1[0] sdb1[1] 6835520 blocks [2/2] [UU] unused devices: <none> Then after I reboot, this is what the output of $ cat /proc/mdstat looks like: Personalities : [raid1] md3 : active raid1 sda4[0] sdb4[1] 244186840 blocks super 1.2 [2/2] [UU] md2 : active raid1 sdd1[1] 732574464 blocks [2/1] [_U] md1 : active raid1 sda3[0] sdb3[1] 722804416 blocks [2/2] [UU] md0 : active raid1 sda1[0] sdb1[1] 6835520 blocks [2/2] [UU] unused devices: <none> During reboot, here is the output of $ sudo cat /var/log/syslog | grep mdadm : Jun 22 19:00:08 rook mdadm[1709]: RebuildFinished event detected on md device /dev/md2 Jun 22 19:00:08 rook mdadm[1709]: SpareActive event detected on md device /dev/md2, component device /dev/sdc1 Jun 22 19:00:20 rook kernel: [ 7819.446412] mdadm: sending ioctl 1261 to a partition! Jun 22 19:00:20 rook kernel: [ 7819.446415] mdadm: sending ioctl 1261 to a partition! Jun 22 19:00:20 rook kernel: [ 7819.446782] mdadm: sending ioctl 1261 to a partition! Jun 22 19:00:20 rook kernel: [ 7819.446785] mdadm: sending ioctl 1261 to a partition! Jun 22 19:00:20 rook kernel: [ 7819.515844] mdadm: sending ioctl 1261 to a partition! Jun 22 19:00:20 rook kernel: [ 7819.515847] mdadm: sending ioctl 1261 to a partition! Jun 22 19:00:20 rook kernel: [ 7819.606829] mdadm: sending ioctl 1261 to a partition! Jun 22 19:00:20 rook kernel: [ 7819.606832] mdadm: sending ioctl 1261 to a partition! Jun 22 19:03:48 rook kernel: [ 8027.855616] mdadm: sending ioctl 1261 to a partition! Jun 22 19:03:48 rook kernel: [ 8027.855620] mdadm: sending ioctl 1261 to a partition! Jun 22 19:03:48 rook kernel: [ 8027.855950] mdadm: sending ioctl 1261 to a partition! Jun 22 19:03:48 rook kernel: [ 8027.855952] mdadm: sending ioctl 1261 to a partition! Jun 22 19:03:49 rook kernel: [ 8027.962169] mdadm: sending ioctl 1261 to a partition! Jun 22 19:03:49 rook kernel: [ 8027.962171] mdadm: sending ioctl 1261 to a partition! Jun 22 19:03:49 rook kernel: [ 8028.054365] mdadm: sending ioctl 1261 to a partition! Jun 22 19:03:49 rook kernel: [ 8028.054368] mdadm: sending ioctl 1261 to a partition! Jun 22 19:10:23 rook kernel: [ 9.588662] mdadm: sending ioctl 1261 to a partition! Jun 22 19:10:23 rook kernel: [ 9.588664] mdadm: sending ioctl 1261 to a partition! Jun 22 19:10:23 rook kernel: [ 9.601990] mdadm: sending ioctl 1261 to a partition! Jun 22 19:10:23 rook kernel: [ 9.601991] mdadm: sending ioctl 1261 to a partition! Jun 22 19:10:23 rook kernel: [ 9.602693] mdadm: sending ioctl 1261 to a partition! Jun 22 19:10:23 rook kernel: [ 9.602695] mdadm: sending ioctl 1261 to a partition! Jun 22 19:10:23 rook kernel: [ 9.605981] mdadm: sending ioctl 1261 to a partition! 
Jun 22 19:10:23 rook kernel: [ 9.605983] mdadm: sending ioctl 1261 to a partition! Jun 22 19:10:23 rook kernel: [ 9.606138] mdadm: sending ioctl 800c0910 to a partition! Jun 22 19:10:23 rook kernel: [ 9.606139] mdadm: sending ioctl 800c0910 to a partition! Jun 22 19:10:48 rook mdadm[1737]: DegradedArray event detected on md device /dev/md2 Here is the result of $ cat /etc/mdadm/mdadm.conf: ARRAY /dev/md0 metadata=0.90 UUID=92121d42:37f46b82:926983e9:7d8aad9b ARRAY /dev/md1 metadata=0.90 UUID=9c1bafc3:1762d51d:c1ae3c29:66348110 ARRAY /dev/md2 metadata=0.90 UUID=98cea6ca:25b5f305:49e8ec88:e84bc7f0 ARRAY /dev/md3 metadata=1.2 name=rook:3 UUID=ca3fce37:95d49a09:badd0ddc:b63a4792 Here is the output of $ sudo mdadm -E /dev/sdc1 after re-adding the device and letting it resync: /dev/sdc1: Magic : a92b4efc Version : 0.90.00 UUID : 98cea6ca:25b5f305:49e8ec88:e84bc7f0 (local to host rook) Creation Time : Sun Jul 13 08:05:55 2008 Raid Level : raid1 Used Dev Size : 732574464 (698.64 GiB 750.16 GB) Array Size : 732574464 (698.64 GiB 750.16 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 2 Update Time : Mon Jun 24 07:42:49 2013 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Checksum : 5fd6cc13 - correct Events : 180998 Number Major Minor RaidDevice State this 0 8 33 0 active sync /dev/sdc1 0 0 8 33 0 active sync /dev/sdc1 1 1 8 49 1 active sync /dev/sdd1 Here is the output of $ sudo mdadm -D /dev/md2 after re-adding the device and letting it resync: /dev/md2: Version : 0.90 Creation Time : Sun Jul 13 08:05:55 2008 Raid Level : raid1 Array Size : 732574464 (698.64 GiB 750.16 GB) Used Dev Size : 732574464 (698.64 GiB 750.16 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 2 Persistence : Superblock is persistent Update Time : Mon Jun 24 07:42:49 2013 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 UUID : 98cea6ca:25b5f305:49e8ec88:e84bc7f0 (local to host rook) Events : 0.180998 Number Major Minor RaidDevice State 0 8 33 0 active sync /dev/sdc1 1 8 49 1 active sync /dev/sdd1 I also ran $ sudo smartctl -t long /dev/sdc and no hardware issues were detected. As long as I do not reboot, /dev/md2 seems to work fine. Does anyone have any suggestions?


  • Xcode 4 and cocos2d 1.0.0 beta: uncategorized errors and Info.plist doesn't exist

    - by badben
    I just installed the xcode 4 sdk and the cocos2d 1.0.0 beta template. I just created a new project with the cocos2d template. But when I build I got these errors : (for information my previous projects developed with xcode 3 have the same problem) warning: couldn't add 'com.apple.XcodeGenerated' tag to '/Users/Benoit/Library/Developer/Xcode/DerivedData/xcode4-bswxazfuwbsguiasyatbtlmvbpps/Build/Intermediates/xcode4.build': Error Domain=NSPOSIXErrorDomain Code=2 UserInfo=0x201dde680 "The operation couldn’t be completed. No such file or directory" error: unable to create '/Users/Benoit/Library/Developer/Xcode/DerivedData/xcode4-bswxazfuwbsguiasyatbtlmvbpps/Build/Intermediates' (Permission denied) error: unable to create '/Users/Benoit/Library/Developer/Xcode/DerivedData/xcode4-bswxazfuwbsguiasyatbtlmvbpps/Build/Products' (Permission denied) Unable to create directory /Users/Benoit/Library/Developer/Xcode/DerivedData/xcode4-bswxazfuwbsguiasyatbtlmvbpps/Build/Intermediates/xcode4.build/Debug-iphonesimulator/xcode4.build/Objects-normal/i386 Unable to create directory /Users/Benoit/Library/Developer/Xcode/DerivedData/xcode4-bswxazfuwbsguiasyatbtlmvbpps/Build/PrecompiledHeaders/Prefix-dflnzjtztxdgjwhistrvvjxetfrg Unable to create directory /Users/Benoit/Library/Developer/Xcode/DerivedData/xcode4-bswxazfuwbsguiasyatbtlmvbpps/Build/Intermediates/xcode4.build/Debug-iphonesimulator/xcode4.build Unable to create directory /Users/Benoit/Library/Developer/Xcode/DerivedData/xcode4-bswxazfuwbsguiasyatbtlmvbpps/Build/Intermediates/xcode4.build/Debug-iphonesimulator/xcode4.build Unable to create directory /Users/Benoit/Library/Developer/Xcode/DerivedData/xcode4-bswxazfuwbsguiasyatbtlmvbpps/Build/Intermediates/xcode4.build/Debug-iphonesimulator/xcode4.build Unable to create directory /Users/Benoit/Library/Developer/Xcode/DerivedData/xcode4-bswxazfuwbsguiasyatbtlmvbpps/Build/Intermediates/xcode4.build/Debug-iphonesimulator/xcode4.build Unable to create directory /Users/Benoit/Library/Developer/Xcode/DerivedData/xcode4-bswxazfuwbsguiasyatbtlmvbpps/Build/PrecompiledHeaders/Prefix-fqemzerugrwojibbegzkffljkxqs Unable to create directory /Users/Benoit/Library/Developer/Xcode/DerivedData/xcode4-bswxazfuwbsguiasyatbtlmvbpps/Build/Intermediates/xcode4.build/Debug-iphonesimulator/xcode4.build Unable to create directory /Users/Benoit/Library/Developer/Xcode/DerivedData/xcode4-bswxazfuwbsguiasyatbtlmvbpps/Index/PrecompiledHeaders/Prefix-dbtcglhksokwygezixirqkgfipsr_ast Unable to create directory /Users/Benoit/Library/Developer/Xcode/DerivedData/xcode4-bswxazfuwbsguiasyatbtlmvbpps/Index/PrecompiledHeaders/Prefix-gdirtpasdqzasnclnkzguimarjpd_ast error: couldn't create directory /Users/Benoit/Library/Developer/Xcode/DerivedData/xcode4-bswxazfuwbsguiasyatbtlmvbpps/Build/Products/Debug-iphonesimulator/xcode4.app: Permission denied error: couldn't create directory /Users/Benoit/Library/Developer/Xcode/DerivedData/xcode4-bswxazfuwbsguiasyatbtlmvbpps/Build/Products/Debug-iphonesimulator/xcode4.app: Permission denied The file “Info.plist” doesn’t exist. Please help !!


  • Rails User-Profile model challenges

    - by Craig
    I am attempting to create an enrollment process similar to SO's: route to an OpenID provider provider returns the user's information to the UsersController (a guess) UsersController creates user, then routes to the ProfilesController's new or edit action. For now, I'm simply trying to create the user, then route to the ProfilesController's new or edit action (not sure which I should be using). Here's what I have thus far: Models: class User < ActiveRecord::Base has_one :profile end class Profile < ActiveRecord::Base belongs_to :user end Routes: map.resources :users do |user| user.resource :profile end new_user_profile GET /users/:user_id/profile/new(.:format) {:controller=>"profiles", :action=>"new"} edit_user_profile GET /users/:user_id/profile/edit(.:format) {:controller=>"profiles", :action=>"edit"} user_profile GET /users/:user_id/profile(.:format) {:controller=>"profiles", :action=>"show"} PUT /users/:user_id/profile(.:format) {:controller=>"profiles", :action=>"update"} DELETE /users/:user_id/profile(.:format) {:controller=>"profiles", :action=>"destroy"} POST /users/:user_id/profile(.:format) {:controller=>"profiles", :action=>"create"} users GET /users(.:format) {:controller=>"users", :action=>"index"} POST /users(.:format) {:controller=>"users", :action=>"create"} new_user GET /users/new(.:format) {:controller=>"users", :action=>"new"} edit_user GET /users/:id/edit(.:format) {:controller=>"users", :action=>"edit"} user GET /users/:id(.:format) {:controller=>"users", :action=>"show"} PUT /users/:id(.:format) {:controller=>"users", :action=>"update"} DELETE /users/:id(.:format) {:controller=>"users", :action=>"destroy"} Controllers: class UsersController < ApplicationController # generate new-user form def new @user = User.new end # process new-user-form post def create @user = User.new(params[:user]) if @user.save redirect_to new_user_profile_path(@user) ... end end # generate edit-user form def edit @user = User.find(params[:id]) end # process edit-user-form post def update @user = User.find(params[:id]) respond_to do |format| if @user.update_attributes(params[:user]) flash[:notice] = 'User was successfully updated.' format.html { redirect_to(users_path) } format.xml { head :ok } ... end end end class ProfilesController < ApplicationController before_filter :get_user def get_user @user = User.find(params[:user_id]) end # generate new-profile form def new @user.profile = Profile.new @profile = @user.profile end # process new-profile-form post def create @user.profile = Profile.new(params[:profile]) @profile = @user.profile respond_to do |format| if @profile.save flash[:notice] = 'Profile was successfully created.' format.html { redirect_to(@profile) } format.xml { render :xml => @profile, :status => :created, :location => @profile } ... end end end # generate edit-profile form def edit @profile = @user.profile end # generate edit-profile-form post def update @profile = @user.profile respond_to do |format| if @profile.update_attributes(params[:profile]) flash[:notice] = 'Profile was successfully updated.' # format.html { redirect_to(@profile) } format.html { redirect_to(user_profile(@user)) } format.xml { head :ok } else format.html { render :action => "edit" } format.xml { render :xml => @profile.errors, :status => :unprocessable_entity } end end end Edit-User View: ... <% form_for(@user) do |f| %> ... New-Profile View: ... <% form_for([@user,@profile]) do |f| %> .. 
    I'm having two problems: (1) When saving an edit to the User model, the UsersController attempts to route to http://localhost:3000/users/1/profile.%23%3Cprofile:0x10438e3e8%3E instead of http://localhost:3000/users/1/profile. (2) When the new-profile form is being rendered, it throws an error that reads: undefined method `user_profiles_path' for # Is it better to create a blank profile when the user is created (in the UsersController) and then edit it, OR follow the RESTful convention of creating the profile in the ProfilesController (as I have done)? What am I missing? I did review Associating Two Models in Rails (user and profile), but it didn't address my needs. Thanks for your time.

