Search Results

Search found 20155 results on 807 pages for 'things'.


  • Using SPServices &amp; jQuery to Find My Stuff from Multi-Select Person/Group Field

    - by Mark Rackley
    Okay… quick blog post for all you SPServices fans out there. I needed to quickly write a script that would return all the tasks currently assigned to me. I also wanted it to return any task assigned to a group I belong to. This can actually be done with a CAML query, so no big deal, right? The rub is that the “assigned to” field is a multi-select person or group field. As far as I know (and I actually know so little) you cannot just write a CAML query to return this information. If you can, please leave a comment below and disregard the rest of this blog post…

    So… what’s a hacker to do? As always, I break things down to their most simple components (I really love the KISS principle and would get it tattooed on my back if people wouldn’t think it meant “Knights In Satan’s Service”. You really gotta be an old fart to get that reference). Here’s what we’re going to do:

    1. Get the currently logged in user’s name as it is stored in a person field
    2. Find all the SharePoint groups the current user belongs to
    3. Retrieve a set of assigned tasks from the task list, and then find those that are assigned to the current user or to a group the current user belongs to

    Nothing too hairy… so let’s get started.

    Some Caveats before I continue

    There are some obvious performance implications with this solution, as I make a total of four SPServices calls and there’s a lot of looping going on. Also, the CAML query in this blog has NOT been optimized. If you move forward with this code, tweak it so that it returns a smaller subset of data, or you will see horrible performance once you have a few hundred entries in your task list. Add a date range to the CAML or something; find some way to limit the results as much as possible. Lastly, if you DO have a better solution, I would like you to share. Iron sharpens iron and all…

    Alright, let’s really get started.

    Get currently logged in user’s name as it is stored in a person field

    The first thing we need to do is understand how a person or group looks in the XML returned from a SharePoint Web Service call. It turns out it’s stored like any other multi-select item in SharePoint, which is <id>;#<value>, and when you assign a person to that field the <value> equals the person’s name (“Mark Rackley” in my case). This is for Windows Authentication; I would expect this to be different in FBA, but I’m not using FBA. If you want to know what it looks like with FBA, you can use the code in this blog and strategically place an alert to see the value.

    Anyway… I need to find the name of the currently logged in user as it is stored in the person field. This turns out to be one SPServices call:

        var userName = $().SPServices.SPGetCurrentUser({
            fieldName: "Title",
            debug: false
        });

    As you can see, the “Title” field has the information we need. I suspect (although again, I haven’t tried) that the Title field also contains the user’s name in the form we need if you are using FBA. Okay… the last thing we need to do here is store our user’s name in an array for processing later:

        myGroups = new Array();
        myGroups.push(userName);

    Find all the SharePoint groups the current user belongs to

    Now for the groups. How are groups returned in that XML stream? The same as the person, <ID>;#<Group Name>, and if it’s a multi-select it’s all returned in one big long string: “<ID>;#<Group Name>;#<ID>;#<Group Name>;#<ID>;#<Group Name>”. So, how do we find all the groups the current user belongs to? This is also a simple SPServices call. Using the “GetGroupCollectionFromUser” operation we can find all the groups a user belongs to. So, let’s execute this operation and store all our groups:

        $().SPServices({
            operation: "GetGroupCollectionFromUser",
            userLoginName: $().SPServices.SPGetCurrentUser(),
            async: false,
            completefunc: function(xData, Status) {
                $(xData.responseXML).find("[nodeName=Group]").each(function() {
                    myGroups.push($(this).attr("Name"));
                });
            }
        });

    All we did in the above code was execute the “GetGroupCollectionFromUser” operation, look at each “Group” node (row), and store the name of each group in the array we put the user’s name in previously (myGroups). Now we have an array that contains the current user’s name, as it will appear in the person field XML, and all the groups the current user belongs to.

    The Rest

    Now comes the easy part for all of you familiar with SPServices. We are going to retrieve our tasks from the Task list using “GetListItems” and look at each entry to see if it belongs to this person. If it does belong to this person, we are going to store it for later processing. That code looks something like this:

        // get list of assigned tasks that aren't closed... *modify the CAML to perform better!*
        $().SPServices({
            operation: "GetListItems",
            async: false,
            listName: "Tasks",
            CAMLViewFields: "<ViewFields>" +
                "<FieldRef Name='AssignedTo' />" +
                "<FieldRef Name='Title' />" +
                "<FieldRef Name='StartDate' />" +
                "<FieldRef Name='EndDate' />" +
                "<FieldRef Name='Status' />" +
                "</ViewFields>",
            CAMLQuery: "<Query><Where><And><IsNotNull><FieldRef Name='AssignedTo'/></IsNotNull>" +
                "<Neq><FieldRef Name='Status'/><Value Type='Text'>Completed</Value></Neq></And></Where></Query>",
            completefunc: function (xData, Status) {
                var aDataSet = new Array();
                // loop through each returned Task
                $(xData.responseXML).find("[nodeName=z:row]").each(function() {
                    // store the multi-select string of who the task is assigned to
                    var assignedToString = $(this).attr("ows_AssignedTo");
                    found = false;
                    // loop through the person's name and all the groups they belong to
                    for (var i = 0; i < myGroups.length; i++) {
                        // if the person's name or group exists in the assigned-to string,
                        // then the task is assigned to them
                        if (assignedToString.indexOf(myGroups[i]) >= 0) {
                            found = true;
                            break;
                        }
                    }
                    // if the Task belongs to this person, then store or display it
                    // (I'm storing it in an array)
                    if (found) {
                        var thisName = $(this).attr("ows_Title");
                        var thisStartDate = $(this).attr("ows_StartDate");
                        var thisEndDate = $(this).attr("ows_EndDate");
                        var thisStatus = $(this).attr("ows_Status");
                        var aDataRow = new Array(
                            thisName,
                            thisStartDate,
                            thisEndDate,
                            thisStatus);
                        aDataSet.push(aDataRow);
                    }
                });
                SomeFunctionToDisplayData(aDataSet);
            }
        });

    Some notes on why I did certain things, and additional caveats. You will notice in my code that I’m doing assignedToString.indexOf(myGroups[i]) to see if the task belongs to the person. This could possibly return bad results if you have SharePoint group names that are named in such a way that the indexOf returns a false positive. For example, if you have a group called “My Users” and a group called “My Users – SuperUsers”, then for a user who belongs only to “My Users”, a task assigned to “My Users – SuperUsers” would still match, because the assigned-to string containing “My Users – SuperUsers” returns a hit for indexOf(“My Users”). Make sense? Just be aware of this when naming groups; we don’t have this problem. This is also where some fine-tuning can probably be done by those smarter than me. This is a pretty inefficient method to determine whether a task belongs to a user; I mean, what if a user belongs to 20 groups? That’s a LOT of looping. See all the opportunities I give you guys to do something fun??

    Also, why am I storing my values in an array instead of just writing them out to a div? Well… I want to pass my data to a jQuery library to format it all nice and pretty, and an array is a great way to do that. When all is said and done and we put all the code together, it looks like this:

        $(document).ready(function() {
            var userName = $().SPServices.SPGetCurrentUser({
                fieldName: "Title",
                debug: false
            });

            myGroups = new Array();
            myGroups.push(userName);

            $().SPServices({
                operation: "GetGroupCollectionFromUser",
                userLoginName: $().SPServices.SPGetCurrentUser(),
                async: false,
                completefunc: function(xData, Status) {
                    $(xData.responseXML).find("[nodeName=Group]").each(function() {
                        myGroups.push($(this).attr("Name"));
                    });

                    // get list of assigned tasks that aren't closed...
                    // *modify this CAML to perform better!*
                    $().SPServices({
                        operation: "GetListItems",
                        async: false,
                        listName: "Tasks",
                        CAMLViewFields: "<ViewFields>" +
                            "<FieldRef Name='AssignedTo' />" +
                            "<FieldRef Name='Title' />" +
                            "<FieldRef Name='StartDate' />" +
                            "<FieldRef Name='EndDate' />" +
                            "<FieldRef Name='Status' />" +
                            "</ViewFields>",
                        CAMLQuery: "<Query><Where><And><IsNotNull><FieldRef Name='AssignedTo'/></IsNotNull>" +
                            "<Neq><FieldRef Name='Status'/><Value Type='Text'>Completed</Value></Neq></And></Where></Query>",
                        completefunc: function (xData, Status) {
                            var aDataSet = new Array();
                            // loop through each returned Task
                            $(xData.responseXML).find("[nodeName=z:row]").each(function() {
                                var assignedToString = $(this).attr("ows_AssignedTo");
                                found = false;
                                for (var i = 0; i < myGroups.length; i++) {
                                    if (assignedToString.indexOf(myGroups[i]) >= 0) {
                                        found = true;
                                        break;
                                    }
                                }
                                if (found) {
                                    var thisName = $(this).attr("ows_Title");
                                    var thisStartDate = $(this).attr("ows_StartDate");
                                    var thisEndDate = $(this).attr("ows_EndDate");
                                    var thisStatus = $(this).attr("ows_Status");
                                    var aDataRow = new Array(
                                        thisName,
                                        thisStartDate,
                                        thisEndDate,
                                        thisStatus);
                                    aDataSet.push(aDataRow);
                                }
                            });
                            SomeFunctionToDisplayData(aDataSet);
                        }
                    });
                }
            });
        });

    Final Thoughts

    So, there you have it. Take it and run with it. Make it something cool (and tell me how you did it). Another possible way to improve performance in this scenario is to use a DVWP to display the tasks and use jQuery and the “myGroups” array from this blog post to hide all the rows that don’t belong to the current user. I haven’t tried it, but it does move some of the processing off to the server (generating the view), so it may perform better. As always, thanks for stopping by… hope you have a Merry Christmas…
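
    One more thought on that indexOf() caveat: since ows_AssignedTo is just ";#"-delimited pairs of <id>;#<value>, you can sidestep the false-positive problem by splitting the string and comparing names exactly instead of by substring. A quick sketch of that idea (illustrative only, not part of the solution above):

        // sketch: exact matching against the <id>;#<value> pairs in ows_AssignedTo
        function isAssignedTo(assignedToString, myGroups) {
            var parts = assignedToString.split(";#");
            // names sit at the odd indexes: [id, name, id, name, ...]
            for (var i = 1; i < parts.length; i += 2) {
                for (var j = 0; j < myGroups.length; j++) {
                    if (parts[i] === myGroups[j]) {
                        return true;
                    }
                }
            }
            return false;
        }

    Swapping the indexOf() loop for a call like isAssignedTo(assignedToString, myGroups) keeps the same behavior but only matches whole names, so “My Users – SuperUsers” no longer matches members of “My Users”.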


  • How to detect when a user copies files from a server over the network?

    - by Mr. Graves
    I have a few virtual servers + desktops that are used for shared development with remote users, including some consultants. Each user has an account with access to most aspects of the server. I don't want to prevent people from being productive, or track passwords or read emails, but I do want to know when and what files they copy from the virtual server, what they upload from the server to a remote site, and what applications (if any) they install. This will help make sure my IP is protected, that no one is installing tools they shouldn't, and that things are licensed appropriately. What is the simplest way to do this? In order of importance, detecting file transfers off the machine is the most critical. Thanks


  • How to back up a network volume to my Time Capsule?

    - by Mike
    I have a Time Capsule that I'm using for my backups. I have a network volume (coincidentally on the same time capsule) that I'd like to back up as well. How can I tell Time Machine to back up network volumes in addition to my main laptop hard drive? PS: yes, I know this setup isn't ideal. It'll incur 2x network overhead when backing up the network volume, plus my data won't be safe in the event of a drive failure since both copies will be on the same disk. However, it will give me some small amount of safety in the event I accidentally delete files on the network volume, among other things.


  • How to start/stop iptables in Ubuntu 12.04?

    - by imwrng
    I am using Ubuntu 12.04. While learning some new things about iptables, I can't get through this. When I try to start and stop the service, this is what I get:

        root@badfox:~# iptables -L -n -v
        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target     prot opt in     out     source     destination
        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target     prot opt in     out     source     destination
        Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target     prot opt in     out     source     destination
        root@badfox:~# service iptables stop
        iptables: unrecognized service
        root@badfox:~# service iptables start
        iptables: unrecognized service

    Source: http://www.cyberciti.biz/tips/linux-iptables-examples.html

    Why am I getting this?

    EDIT: So my firewall is already started, but then why am I not getting the output shown in the first example at the source link above? Here is my output:

        root@badfox:~# sudo start ufw
        start: Job is already running: ufw
        root@badfox:~# iptables -L -n -v
        Chain INPUT (policy ACCEPT 4882 packets, 2486K bytes)
         pkts bytes target     prot opt in     out     source     destination
        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target     prot opt in     out     source     destination
        Chain OUTPUT (policy ACCEPT 5500 packets, 873K bytes)
         pkts bytes target     prot opt in     out     source     destination
        root@badfox:~#
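
    (For reference: Ubuntu does not ship a System-V-style "iptables" service the way RHEL/CentOS do, which is why service iptables start reports "unrecognized service". The rules iptables -L shows are kernel state, not a daemon, and ufw is the frontend Ubuntu bundles. A hedged sketch of the Ubuntu-side equivalents of the linked article's commands, assuming a stock 12.04 install:

        # there is no "service iptables" on Ubuntu; ufw manages the firewall
        sudo ufw enable            # turn the ufw-managed firewall on
        sudo ufw status verbose    # inspect its rules
        # or manage raw rules yourself:
        sudo iptables-save > /etc/iptables.rules     # dump the current rules
        sudo iptables-restore < /etc/iptables.rules  # load them back, e.g. at boot

    The empty ACCEPT chains in the first listing just mean no rules were loaded yet; the firewall machinery itself was working all along.)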


  • What is the point of dynamic allocation in C++?

    - by Aerovistae
    I really have never understood it at all. I can do it, but I just don't get why I would want to. For instance, I was programming a game yesterday, and I set up an array of pointers to dynamically allocated little enemies in the game, then passed it to a function which updates their positions. When I ran the game, I got one of those nondescript assertion errors, something about a memory block not existing, I don't know. It was a run-time error, so it didn't say where the problem was. So I just said screw it and rewrote it with static instantiation, i.e.:

        while (n < 4) {
            Enemy tempEnemy = Enemy(3, 4);
            enemyVector.push_back(tempEnemy);
            n++;
        }
        updatePositions(&enemyVector);

    And it immediately worked perfectly. Now sure, some of you may be thinking something to the effect of "Maybe if you knew what you were doing," or perhaps "n00b can't use pointers L0L," but frankly, you really can't deny that they make things way overcomplicated, hence most modern languages have done away with them entirely. But please-- someone -- What IS the point of dynamic allocation? What advantage does it afford? Why would I ever not do what I just did in the above example?
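
    (A quick illustration, with made-up names, of what dynamic allocation buys you: it lets an object's concrete type and lifetime be decided at run time. A minimal sketch:

        #include <iostream>
        #include <memory>
        #include <vector>

        struct Enemy {
            Enemy(int x, int y) : x(x), y(y) {}
            virtual ~Enemy() {}   // virtual: deleting via Enemy* must run Boss's destructor
            virtual const char* kind() const { return "grunt"; }
            int x, y;
        };

        struct Boss : Enemy {
            Boss(int x, int y) : Enemy(x, y) {}
            const char* kind() const { return "boss"; }
        };

        // the returned object outlives this function's stack frame, and its
        // concrete type is chosen at run time: both require the heap
        std::unique_ptr<Enemy> spawn(bool boss) {
            if (boss) return std::unique_ptr<Enemy>(new Boss(0, 0));
            return std::unique_ptr<Enemy>(new Enemy(3, 4));
        }

        int main() {
            std::vector<std::unique_ptr<Enemy>> enemies;
            for (int n = 0; n < 4; ++n)
                enemies.push_back(spawn(n == 3));   // the count could come from user input instead
            for (const auto& e : enemies)
                std::cout << e->kind() << '\n';     // grunt, grunt, grunt, boss
        }

    Worth noting: the "static instantiation" version above is itself using dynamic allocation under the hood; std::vector stores its elements on the heap and manages their lifetime, which is a big part of why it "just worked".)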


  • Sprite Sheets in PyGame?

    - by Eamonn
    So, I've been doing some googling, and haven't found a good solution to my problem. My problem is that I'm using PyGame, and I want to use a sprite sheet for my player. This is all well and good, and it would be too, if I wasn't using a sprite sheet strip. Basically, if you don't understand, I have a strip of 32x32 'frames'. These frames are all in one image, alongside each other. So, I have 3 frames in 1 image. I'd like to be able to use them as my sprite sheet, and not have to crop them up. I have used an awesome, popular and easy-to-use game framework for Lua called LÖVE. LÖVE has these things called "Quads". They are similar to texture regions in LibGDX, if you know what those are. Basically, quads allow you to get parts of an image. You define how large a quad is, and you define parts or 'regions' of an image that way. I would like to do something similar to this in PyGame: use a "for" loop to go through the entire image width and height, mark each 32x32 area (or whatever the user defines as their desired frame width and height), and store that in a list or something for use later on. I'd define an animation speed and stuff, but that's for later on. I've been looking around on the web, and I can't find anything that will do this. I found 1 script on the PyGame website, but it crashed PyGame when I tried to run it. I tried for hours to fix it, but no luck. So, is there a way to do this? Is there a way to get regions of an image? Am I going about this the wrong way? Is there a simpler way to do this? Thanks! :-)
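
    (A minimal sketch of exactly that loop, assuming the strip layout described above. pygame.Surface.subsurface gives you a view into the sheet, much like a LÖVE Quad, so nothing has to be cropped apart:

        import pygame

        def load_frames(sheet_path, frame_width, frame_height):
            """Slice a sprite-sheet strip into frame surfaces, no cropping needed."""
            # call this after pygame.display.set_mode(); convert_alpha() needs a display
            sheet = pygame.image.load(sheet_path).convert_alpha()
            frames = []
            for y in range(0, sheet.get_height(), frame_height):
                for x in range(0, sheet.get_width(), frame_width):
                    # a subsurface shares its pixels with the sheet, like a LOVE Quad
                    frames.append(sheet.subsurface(pygame.Rect(x, y, frame_width, frame_height)))
            return frames

        # hypothetical usage, for a strip of three 32x32 frames in one image:
        # frames = load_frames("player_strip.png", 32, 32)
        # screen.blit(frames[frame_index], (player_x, player_y))

    From there, animating is just advancing frame_index on a timer.)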


  • Reduce munin logging level

    - by petrus
    Munin is quite verbose, and logs a bunch of things into munin-graph.log, munin-html.log, munin-limits.log and munin-update.log at each run of munin-cron. I already reduced munin-node's logging level by setting log_level 0 in munin-node.conf, and that works well: munin-node.log only gets updated when an error message is generated. However, when I tried to add the same option in munin.conf, it made munin crash. How can one reduce the amount of logs written by munin?


  • Starting out with OpenGL when most tutorials are out of date

    - by AUTO
    I'm sure there are already a bunch of questions like this asked, but the constant updating of the OpenGL library throws them all away, and in a month or two, the answers here will be worthless again. I am ready to start programming in OpenGL using C++. I've got a working compiler (DevCpp; do NOT ask me to switch to VC++, and don't ask me why). Now I'm just looking for a solid tutorial on how to program with OpenGL. My assistant found the tutorial provided by NeHe Productions, but as I've come to find out, it's WAY OUT OF DATE! (although I did pull together a basic window to support an OpenGL canvas) Then I went online and found the OpenGL SuperBible, which apparently uses freeglut? But what I'd like to know is whether the SuperBible 5th edition is still up to date. The suggestion to use freeglut that I found said the latest version was 2.6.0, but now it's 2.8.0! Is the OpenGL SuperBible still a good, and fairly up-to-date, place to start? Is there a better place to go to learn OpenGL? Am I allowed to simply store freeglut in the DevCpp include directory (maybe in GL), or is there some important procedure? Are there any comments or suggestions about things I didn't think to ask, since I'm only just beginning?

    @dreta cleared some things up for me, so now I have a better idea of what to ask: I think I'd like to start out with OpenGL using a wrapper library instead of directly accessing OpenGL. I just think that, for a beginner, it would be easier for me to program and get good results, while I don't yet have to understand all the grimy details (as @stephelton mentioned). The problem is, I can't find any library that doesn't have undefined references to no-longer-supported functions. Freeglut sounds operational, but it still uses GLU. Does anyone know what I can do? Also, I tried compiling the first SuperBible's source, but I got errors since GLAPI is not being defined as a type, the error originating in the GLU library. I'd like to use the SuperBible, but I don't know how to fix this.


  • Cannot install Ubuntu on an Acer Aspire One 756

    - by Byron807
    I have used Ubuntu before, in virtual machines, but today I decided to make the leap and I bought a netbook to install Ubuntu as a "real" OS alongside Windows. The netbook I bought is an Acer Aspire One 756, with a 64-bit Intel processor, 4GB RAM, and Windows 8 as the default OS. I have now encountered several obstacles that actually prevent me from installing Ubuntu 12.10. Here are all the things I have tried so far:

    Used a live CD, in combination with a USB DVD drive. (I should point out that the Aspire One does not have an optical drive.) The computer does not boot into Ubuntu; the drive keeps spinning, but nothing happens, even though I changed the boot order in the BIOS.

    Used a USB drive created via the tool available on pendrivelinux.com. Again, I've made changes to the BIOS to make sure the computer tries to boot from USB before using the built-in HDD. The results vary in this case: sometimes the computer keeps rebooting like crazy until I remove the USB drive, at which point it boots into Windows 8, as expected. If I use a different USB drive, I get an error message saying that the USB drive has been blocked due to "the current security policy".

    Tried to install Ubuntu via Wubi. The program appears to install something, but at some point during the installation process I get a non-specified error message and nothing else happens.

    I am not sure if these are known issues; in any case, searching the forum has not yielded any results, so I thought I should simply describe my problem here in the hope that this question has not been answered before. I would greatly appreciate any help with this annoying problem. Of course, if anything is unclear, do not hesitate to ask for further details.


  • Wessty: Live with HTML 5 (2011 Speaker Tour)

    - by David Wesst
    That’s right: Wessty is on tour. Okay, the banner and the tour are a little over the top, but I am really excited about my upcoming speaking engagements to spread the word about HTML 5! I have already kicked off the tour with the Winnipeg Code Camp last weekend, with the world premiere of the HTML 5 for .NET Pro presentation, and the turnout was fantastic. It was the last presentation of the day, but we still had some great questions about the new standard and got to see how HTML 5 can fit into .NET web applications today. In any case, above you can see the presentations confirmed so far for 2011, but there are a few more events that I have heard about that I hope to add to that list. Ultimately, expect that list to be updated over the course of the year, as the year is young and there are plenty of conferences coming up!

    Presentation Resources

    As the tour continues, I will be posting the slides and the source code for the demonstrations here on my site. They will be free of charge and give you the chance to review the demos and hopefully take advantage of some of the cool things you see in the presentations.

    Become part of the Tour

    If you are considering hosting an event where you think HTML 5 could use a voice, drop me a line and let me know. I am always looking for opportunities to grow the tour to talk not just about HTML 5, but about a variety of topics that relate to user interface and user experience development.

    This post also appears at http://david.wes.st


  • Coda-like experience for Ubuntu

    - by Dillon Gilmore
    I'm a web developer who's going to transition from using Mac OS X to Ubuntu. I've been using Coda for some time, only because it makes web development easy. I know a full-fledged equivalent isn't available for Linux, but I would like to know about apps that specialize in the same tasks that Coda offers.

    I plan on switching to Vim for code editing; I'm extremely proficient, and will install the Janus plugin and be good to go. One thing that makes editing in Coda so amazing is that it's extremely good at SFTP: you can drag and drop files and/or folders from your local drive to the server, and you can edit code directly on the server. The problem is that with Vim I don't know of a way to edit code on a remote server while using my own Vim settings and plugins. To solve this, I would like to know of a good SFTP client OR a good SFTP CLI. A CLI that could synchronize your files after a file has been modified would be perfect, but is not necessary.

    Now, one of the biggest and best features of Coda is its ability to view your databases. You get to create a database, create tables, add stuff, delete stuff and view the contents of a table (all this without writing a single SQL statement). I will admit that databases are my weak point, but they are a very important part of my job. A tool that specializes in databases would be perfect. I'd prefer not to use the command line for database stuff, but if there is a CLI for databases that I'm missing, it could potentially be useful.

    So I guess I'm asking for two things: a tool that makes databases easier to visualize, and a tool that assists in pushing my local code to a server.


  • What calls trigger a new batch?

    - by sebf
    I am finding my project is starting to show performance degradation and I need to optimize it. The answer to my previous question and this presentation from NVidia have helped greatly in understanding the performance characteristics of code using the GPU, but there are a couple of things that aren't clear that I need to know to optimize my drawing. Specifically, which calls mark the boundary between batches? I know that any state change causes a new batch, so that includes render state changes, buffer changes, shader changes, and render target changes. Correct? What else counts as a 'state change'? Does each Draw**Primitive() call constitute a new batch? Even if I were to issue the same call twice, with no state changes, or call it once on one part of the buffer, then again on another? If I were to update a buffer, but not change the bindings, would that be a new batch? That presentation and a DX9 page suggest using all of the texture slots available, which I take to mean loading multiple objects in 'parallel' by mapping their buffers/shaders/textures to slots 1-16. But I am not sure how this works; surely to do this you would need to change the buffer bindings, and that would count as a state change? (Or is it a case of: you do, but it saves 16 calls, so it's OK?)
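
    (A hedged, generic illustration of how engines typically respond to this, not something taken from the cited presentation: collect submissions, sort them by a packed state key, and only touch state when the key changes, so the number of batches approaches the number of distinct states rather than the number of draws. All names are hypothetical:

        #include <algorithm>
        #include <cstdint>
        #include <vector>

        // one queued submission: the state that would trigger a new batch, plus payload
        struct DrawCall {
            std::uint32_t shaderId;   // assumed to fit in 20 bits for the key below
            std::uint32_t textureId;
            std::uint32_t bufferId;
            int meshIndex;            // stand-in for the actual draw payload
        };

        // pack the state into one sortable key; equal keys can share a batch
        static std::uint64_t stateKey(const DrawCall& d) {
            return (std::uint64_t(d.shaderId) << 40)
                 | (std::uint64_t(d.textureId) << 20)
                 |  std::uint64_t(d.bufferId);
        }

        void flushSorted(std::vector<DrawCall>& calls) {
            std::sort(calls.begin(), calls.end(),
                      [](const DrawCall& a, const DrawCall& b) { return stateKey(a) < stateKey(b); });
            std::uint64_t current = ~std::uint64_t(0);
            for (const DrawCall& d : calls) {
                if (stateKey(d) != current) {
                    current = stateKey(d);
                    // bind shader/texture/buffer here: the only state changes left
                }
                // issue the Draw*Primitive()-style call for d.meshIndex here
            }
        }

    Whatever the exact batch-break rules turn out to be, sorting by state is cheap insurance against paying for the same change twice.)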


  • Essbase Excel Add in - S.o.D.

    - by THE
    Sadly, another long-lasting friend is about to be buried in the wet, cold data void that holds past programs (... and AOL CDs). The Essbase Excel Add-in is about to be discontinued (see Doc ID 1466700.1) in January '13. The (already out) version 11.1.2.2.x of the Excel Add-in must be considered the last release of this particular program (unless the guys from Applied OLAP bring out their own version next to the OpenOffice add-in that they already sport). As expected, SmartView achieved parity in functionality with Release 11.1.2.1.102, and ever since then it was just a question of time before our old buddy would get the shoe. For all the users out there like me who have known and worked with the Excel Add-in for the last decade(s), this is a loss. SmartView may have functionality parity, and may altogether be the stronger, open technology - capable of Planning forms, connection to HFM, etc. But (from my personal point of view) it will not give the end user the same direct access to his databases, with nothing between him and his Essbase server. Of course it was to be expected that only one of the two could survive, and it was obvious that this would be SmartView, so this does not come as a surprise. Still.

    A minute for an old friend . . .

    . . . Thank you, and let us look forward! Unless you had other plans for the upcoming season, why not spend it investigating SmartView for your Essbase interaction needs. We hear that the days between Christmas and New Year hold unlimited potential to test out new things. Or take it as a New Year's resolution: "I will switch to SmartView at the earliest possible moment".


  • Canon MG6100 series USB printer receives job but doesn't physically print

    - by Old-linux-fan
    Printer MP6150 driver installed itself upon plugging in the printer. The printer is recognized (lsusb shows it) but does not mount. If the printer is recognized, the driver must be working (or?), but something is blocking the system from mounting the printer. Tried the usual things: power-cycling the printer, restarting Ubuntu, etc. Listed below are the results of lsusb and fstab:

        hans@kontor-linux:~$ lsusb
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 004: ID 04a9:174a Canon, Inc.
        Bus 002 Device 002: ID 1058:1001 Western Digital Technologies, Inc. External Hard Disk [Elements]
        Bus 004 Device 002: ID 046d:c517 Logitech, Inc. LX710 Cordless Desktop Laser

        hans@kontor-linux:~$ sudo cat /etc/fstab
        [sudo] password for hans:
        # /etc/fstab: static file system information.
        #
        # Use 'blkid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        # / was on /dev/sda6 during installation
        UUID=eaf3b38d-1c81-4de9-98d4-3834d674ff6e / ext4 errors=remount-ro 0 1
        # swap was on /dev/sda5 during installation
        UUID=93a667d3-6132-45b5-ad51-1f8a46c5b437 none swap sw 0 0


  • Too much free space on FreeNAS - ZFS

    - by Guillaume
    I have a FreeNAS server with 3 x 2 TB disks in raidz1. I would expect to have about 4 TB of space available, but when I run zpool list I get:

        [root@freenas] ~# zpool list
        NAME          SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
        main_volume   5.44T  3.95T  1.49T  72%  ONLINE  /mnt

    I was expecting a size of 4 TB. Also, the used space as reported by zpool list does not match what's reported by du:

        [root@freenas] ~# du -sh /mnt/main_volume/
        2.6T    /mnt/main_volume/

    There are quite a few things that I don't yet completely understand about ZFS, but at the moment I am mostly worried that I misconfigured my system and that I don't have any storage redundancy. How can I make sure I did not make a horrible mistake? For the sake of completeness, here is the output of zpool status:

        [root@freenas] ~# zpool status
          pool: main_volume
         state: ONLINE
         scrub: none requested
        config:

            NAME                                            STATE   READ WRITE CKSUM
            main_volume                                     ONLINE     0     0     0
              raidz1                                        ONLINE     0     0     0
                gptid/d8584e45-5b8a-11d9-b9ea-5404a6630115  ONLINE     0     0     0
                gptid/d8f7df30-5b8a-11d9-b9ea-5404a6630115  ONLINE     0     0     0
                gptid/d9877cc3-5b8a-11d9-b9ea-5404a6630115  ONLINE     0     0     0

        errors: No known data errors
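
    (For what it's worth, the numbers above do reconcile under the usual reading that, for raidz pools, zpool list reports raw capacity including parity, while du sees only file data. A hedged back-of-the-envelope check:

        raw size:   3 drives x 2 TB = 6 TB ~ 5.46 TiB, matching the reported 5.44T
        usable:     5.46 TiB x 2/3 ~ 3.6 TiB, i.e. roughly the ~4 TB expected (one disk of three is parity)
        used:       3.95T x 2/3 ~ 2.6T, matching du
        available:  1.49T x 2/3 ~ 1.0 TiB of file space left

    And the zpool status output shows a three-member raidz1 with every device ONLINE, which is exactly the single-disk-failure redundancy expected.)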


  • opening offline sync files from a .CAB file

    - by Rob
    OK, I have downloaded from Windows Live Spaces (don't know if this is useful, but it might be) a .CAB file containing an Index.XML file and package.cab, package01.cab through package12.cab. The Index.XML simply has the names of all the subsequent package.cab files and their offsets. The first package.cab has a single 26MB XML file which appears to be an OfflineSyncFile definition, which I am guessing is the metadata for all the other packageXX.cab files. Now the question I have is how I should go about extracting these things and piecing it all back together again. I have tried WinRAR, which extracts all 800MB for me into unnamed files and randomly named directories. I have also tried the standard extract in Windows Explorer, with much the same results.


  • Option and command keys in OSX are swapped and keyboard preferences do not set them back.

    - by bikesandcode
    On my MacBook Pro, I occasionally use external keyboards, generally Windows ones, and things have been fine. Yesterday, I plugged in a new one and remapped the command/option keys so the Windows/Alt keys were in the same configuration; again, nothing new here. However, this time when I unplugged the USB keyboard, the laptop's option/command keys remained switched. More annoying is that if I go into System Preferences - Keyboards - Modifier keys, remapping the keys does not work. I can use the drop-downs to disable any specific key, but switching the behaviours does nothing. (Cmd/Option is the obvious case; I also tried remapping anything to caps lock and a few other combinations, no joy. Restore Defaults sets the configuration to what I'd expect, but the settings are evidently ignored.) So: any ideas?


  • Introducing Oracle Multitenant

    - by OracleMultitenant
    The First Database Designed for the Cloud

    Today Oracle announced the general availability (GA) of Oracle Database 12c, the first database designed for the Cloud. Oracle Multitenant, new with Oracle Database 12c, is a key component of this: a new architecture for consolidating databases and simplifying operations in the Cloud. With this, the inaugural post in the Multitenant blog, my goal is to start the conversation about Oracle Multitenant. We are very proud of this new architecture, which we view as a major advance for Oracle. Customers, partners and analysts who have had previews are very excited about its capabilities and its flexibility. This high-level review of Oracle Multitenant will touch on our design considerations and how we re-architected our database for the cloud. I'll briefly describe our new multitenant architecture and explain its key benefits. Finally I'll mention some of the major use cases we see for Oracle Multitenant.

    Industry Trends

    We always start by talking to our customers about the pressures and challenges they're facing and what trends they're seeing in the industry. Some things don't change. They face the same pressures and the same requirements as ever. Pressure to do more with less; be faster, leaner, cheaper, and deliver services 24/7. Big companies have achieved scale; now they want to realize economies of scale. As ever, DBAs are faced with the challenges of patching and upgrading large numbers of databases, and provisioning new ones. Requirements are familiar: performance, scalability, reliability and high availability are non-negotiable. They need ever more security in this threatening climate. There's no time to stop and retool with new applications. What's new are the trends: the techniques to use to respond to these pressures within the constraints of the requirements. With the advent of cloud computing and the availability of massively powerful servers - even engineered systems such as Exadata - our customers want to consolidate many applications into fewer, larger servers. There's a move to standardized services - even self-service.

    Consolidation

    Consolidation is not new; companies have tried various approaches to consolidating databases in the cloud. One approach is to partition a powerful server between several virtual machines, one per application. A downside of this is that you have the resource and management overheads of OS and RDBMS per VM - that is, per application. Another is that you have replaced physical sprawl with virtual sprawl, and virtual sprawl is still expensive to manage. In the dedicated database model, we have a single physical server supporting multiple databases, one per application. So there's a shared OS overhead, but RDBMS process and memory overheads are replicated per application. Think about our traditional Oracle Database architecture: every time we create a database, be it a production database or a development or test database, we create a set of files, we allocate a bunch of memory for managing the data, and we kick off a series of background processes. This is replicated for every database that we create. As more and more databases are fired up, these replicated overheads quickly consume the available server resources, and this limits the number of applications we can run on any given server. In Oracle Database 11g and earlier, the highest degree of consolidation could be achieved by what we call schema consolidation. In this model we have one big server with one big database. Individual applications are installed in separate schemas or table-owners. Database overheads are shared between all applications, which affords maximum consolidation. The shortcomings are that application changes are often required, and there is no tenant isolation: one bad apple can spoil the whole batch.

    New Architecture & Benefits

    In Oracle Database 12c, we have a new multitenant architecture, featuring pluggable databases. This delivers all the resource utilization advantages of schema consolidation with none of the downsides. There are two parts to the term "pluggable database": "pluggable", which is new, and "database", which is familiar. Before we get to the exciting new stuff, let's discuss what hasn't changed. A pluggable database is a fully functional Oracle database. It's not watered down in any way. From the perspective of an application or an end user it hasn't changed at all. This is very important, because it means that no application changes are required to adopt this new architecture. There are many thousands of applications built on Oracle databases, and they are all ready to run on Oracle Multitenant. So we have these self-contained pluggable databases (PDBs), and, as their name suggests, they are plugged into a multitenant container database (CDB). The CDB behaves as a single database from the operations point of view. Very much as we had with the schema consolidation model, we only have a single set of Oracle background processes and a single, shared database memory requirement. This gives us very high consolidation density, which affords maximum reduction in capital expenses (CapEx). By performing management operations at the CDB level - "managing many as one" - we can achieve great reductions in operating expenses (OpEx) as well, but we retain granular control where appropriate. Furthermore, the "pluggability" capability gives us portability, and this adds a tremendous amount of agility: we can simply unplug a PDB from one CDB and plug it into another CDB, for example to move it from one SLA tier to another. I'll explore all these new capabilities in much more detail in a future posting.

    Use Cases

    We can identify a number of use cases for Oracle Multitenant. Here are a few of the major ones:

    - Development / testing, where individual engineers need rapid provisioning and recycling of private copies of a few "master test databases"
    - Consolidation of disparate applications using fewer, more powerful servers
    - Software as a Service: deploying separate copies of identical applications to individual tenants
    - Database as a Service: typically self-service provisioning of databases on the private cloud
    - Application distribution from an ISV / installation by the customer, eliminating many typical installation steps (create schema, import seed data, import application code PL/SQL…) - just plug in a PDB!
    - High-volume data distribution - literally via disk drives in envelopes distributed by truck! - of things like GIS or MDM master databases
    - …various others!

    Benefits

    Previous approaches to consolidation have involved a trade-off between reductions in capital expenses (CapEx) and operating expenses (OpEx), and they've usually come at the expense of agility. With Oracle Multitenant you can have your cake and eat it:

    - Minimize CapEx: more applications per server
    - Minimize OpEx: manage many as one; standardized procedures and services; rapid provisioning
    - Maximize agility: cloning for development and testing; portability through pluggability; scalability with RAC
    - Ease of adoption: applications run unchanged; it's a pure deployment choice. Neither the database backend nor the application needs to be changed.

    In future postings I'll explore various aspects in more detail. However, if you feel compelled to devour everything you can about Oracle Multitenant this very minute, have no fear. Visit the Multitenant page on OTN and explore the various resources we have available there. Among these, Oracle Distinguished Product Manager Bryn Llewellyn has written an excellent, thorough, and exhaustively detailed white paper about Oracle Multitenant, which is available here.

    Follow me: I tweet @OraclePDB #OracleMultitenant
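
    (To make "unplug/plug" concrete, here is a hedged sketch of the SQL involved; indicative syntax only - check the 12c documentation for the exact options, and note that the PDB name sales_pdb and the manifest path are made up for the example:

        -- on the source CDB: unplug the PDB, preserving its datafiles
        ALTER PLUGGABLE DATABASE sales_pdb CLOSE IMMEDIATE;
        ALTER PLUGGABLE DATABASE sales_pdb UNPLUG INTO '/tmp/sales_pdb.xml';
        DROP PLUGGABLE DATABASE sales_pdb KEEP DATAFILES;

        -- on the target CDB: plug it in and open it
        CREATE PLUGGABLE DATABASE sales_pdb USING '/tmp/sales_pdb.xml' NOCOPY;
        ALTER PLUGGABLE DATABASE sales_pdb OPEN;
    )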


  • Clustering and custom applications

    - by Ahmed ilyas
    I was not entirely sure what tags to put on this, but I hope this is OK. This is just a general question in regards to clustering and applications. Let's say we have a clustered environment set up, and we cluster SQL Server (I don't know exactly how it's done, but let's just say it has been done for the sake of argument). Now if a website or application is trying to access that database for read/write (say an ASP.NET app or a C# WinForms app) and during that time SQL goes down, it takes a couple of minutes for the clustering failover to take effect and switch to another node. What happens during this time? I think it will time out / be unable to connect. BUT is there a way to place the request in some pipeline so that when the cluster node is back up / switched over it will continue as normal? As you can see, I know nothing much about clustering!

    What about your own custom .NET apps? Would there be a special way to develop them? I know that you can, say, create a simple Hello World app and cluster that, but it wouldn't be something you could see in terms of the UI or anything, so it would effectively need to be developed as a Windows service, or even as a standard console app which runs and doesn't wait for user input, but whose output you wouldn't see (unless you redirect it somewhere else). What I'm getting at here is: for those who have experience with, or have developed, a clustered application in .NET, how did you do it, and what are the things to be aware of? For example, we have the cloud service - fundamentally it's built on clustering - if there is an outage, another node takes its place and service resumes as normal, but we don't really see much of that downtime.


  • Which group memberships are necessary for simple users in Ubuntu 12.04?

    - by Joey Carson
    I'm configuring Ubuntu 12.04 for my sister. I'd like to give her a system that she really can't screw up, but that still lets her do normal things like install software. I don't want to just add her user to /etc/sudoers so that she can become root, because she could possibly mess something up. I know that I should be able to get around this by just adding her to the necessary groups, but I'm not sure which ones those should be. Could anyone suggest them, or point me in the direction of some kind of list of the group memberships that commonly used software in Ubuntu requires?


  • Recommendations for a JetDirect print server for USB 2.0 printers?

    - by eleven81
    I have been using some older HP JetDirect 300x print servers for a variety of parallel printers over the years. These things work great with every printer I have tried them with, including HPs, Dells, and even a Mountbatten braille embosser! They have been a boon for printers whose internal network cards fail but whose parallel ports continue working: I don't have to throw away the $500 printer that is one year and a week old, and can keep using it for many, many years. Now, though, very few printers come with parallel ports; they come solely with USB connections and network cards. When the network card fails but the printer is still usable, I want to continue using it on the network with a JetDirect-style device. In summary: does anyone have any recommendations for print servers that will work as well with USB 2.0 printers, of whatever manufacturer, as my old JetDirect 300x cards do with parallel printers?


  • ZFS: RAIDZ versus stripe with ditto blocks

    - by RandomInsano
    I'm going to build a ZFS file server with FreeBSD. I learned recently that I can't expand a RAIDZ vdev once it's part of the pool. That's a problem, since I'm a home user and will probably add one disk a year at most. But what if I set copies=3 on my entire pool and just throw individual drives into the pool separately? I've read somewhere that the copies will try to distribute across drives if possible. Is there a guarantee of that? I really just want protection from bit rot and drive failure on the cheap. Speed's not an issue, since it'll go over a 1Gb network and at MOST stream 720p podcasts. Would my data be guaranteed safe from a single drive failure? Are there things I'm not considering? Any and all input is appreciated.


  • How to remove uninstalled programs from Notification Area

    - by ekaj
    For some reason I had two instances of Apache running, and I have no idea how. I also had two instances of ApacheMonitor.exe showing in the "Notification Area Icons" page you reach when you right-click on the taskbar and go to Properties. To fix the multiple-instance issue, I deleted Apache completely and uninstalled the service (I did not use the .msi; I install from a .zip). Anyway, everything from Apache is gone except the two entries in the Notification Area Icons page. Does anyone know how to remove these two icons? I have completely uninstalled Apache, cleaned my registry with CCleaner, and rebooted. Does anyone have any other suggestions? Despite what the picture says, I do NOT have Apache installed, and it is not running.


  • PHP - Making CMS (architecture, etc.)

    - by UnknownProgramer
    I'm in the planning stage of a new CMS. I previously used WordPress and other open-source CMSes for my clients, but I always had to write new modules and even mess with the code in order to do certain things, which, as you understand, is not the best thing to do. So I finally decided to make my own CMS that works the way I need. But before I start, I would like to think it through carefully to ensure that I won't need to rewrite it from the ground up just because I forgot to include some feature in the architecture or did it wrong. I would like to hear your thoughts, and most importantly I would like you to suggest articles or books on the subject, especially on the architecture of such systems. I have googled a few good books, but that is not enough.

    The way I'm planning to do it: PHP 5, completely OOP, modular architecture. You make a page and add any modules you need there, but modules are not global; they are local to a page, so you can make two pages with the same module whose content differs if you set a different "content ID" for the two instances. The IDs can also be set the same, so two pages share the same module content. I also plan to support online storage web services (like Amazon S3) for images and files, so I would like to hear your thoughts on that too. I have not yet decided how to store language data; I don't want to use the database for that, but I haven't settled on an alternative. I also think I will support other databases through a global DB class with separate DB wrappers for MySQL and the other databases. And, well, I would appreciate any other information you can provide on the subject.


  • Wordpress Multisite and Google Analytics in subfolders with mapped domains

    - by David
    I have a WordPress multisite using subfolders. The site's subfolders are mapped to domains, which are set as primary. I'm using the 'Google Analytics Multisite Async' code to track things. From what I can see it's tracking the sites fine (each site is getting page hits in Google Analytics), except for the original site in the multisite: its content overview lists the other sites' domains as content, along with the amount of traffic each is getting, alongside the original domain's traffic. I don't want the original site's profile to track any traffic other than what goes to that site itself; i.e. I don't want it tracking my other sites in the multisite. E.g. domain1.com is my original, and I have lots of other sites in the multisite, say domain2.com and domain3.com. In the content overview in Analytics it's listing, say, domain2.com as content. Can I tell it to filter these out somehow, either in Analytics or within WordPress? Hopefully I've explained that clearly!

