Search Results

Search found 38064 results on 1523 pages for 'oracle linux'.

Page 678 of 1523

  • How do I clear out the ssh-agent entries (on Mac OS X)?

    - by cwd
    I'm running Mac OS X, and it appears that after SSHing to several machines using identity files, my ssh-agent accumulates a lot of identities and then sometimes offers too many of them to a remote machine, which kicks me off before I can connect:

        Received disconnect from 10.12.10.16: 2: Too many authentication failures for cwd

    It's pretty obvious what's happening, and this page talks about it in more detail:

        SSH servers only allow you to attempt to authenticate a certain number of times. Each failed password attempt, and each failed pubkey/identity that is offered, takes up one of these attempts. If you have a lot of SSH keys in your agent, you may find that an SSH server kicks you out before allowing you to attempt password authentication at all. If this is the case, there are a few different workarounds.

    Rebooting clears the agent, and then everything works again. I can also add this line to my .ssh/config file to force password authentication:

        PreferredAuthentications keyboard-interactive,password

    Anyhow, the page I referenced talks about deleting keys from the agent, but I'm not sure whether that applies on a Mac, since the keys appear to be cleared after a reboot anyhow. Is there a simple way to clear out all keys in ssh-agent (the same thing that happens at reboot)?
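
    For reference, OpenSSH's ssh-add can inspect and flush the agent without a reboot; a minimal sketch:

        # list the identities the agent currently holds
        ssh-add -l
        # remove all identities from the running agent (the same net effect a reboot has)
        ssh-add -D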

    Read the article

  • When is the default storage rule not really the default storage rule?

    - by Kevin Smith
    In 11g, WebCenter Content (WCC) introduced dispersion rules in the vault and weblayout directory paths to better distribute content across the directories. The dispersion rule was based on dRevClassID. The only problem with this is that dRevClassID did not remain the same when you copied content from one WCC instance to another using Archiver, as in a contribution-consumption scenario. This could cause problems because the web-viewable path would not be the same between the contribution and consumption instances.

    In the PS5 (11.1.1.6.0) release of WCC, this was addressed by configuring the File Store Provider (FSP) so that all new content uses a storage rule whose dispersion rule is based on dDocName, which stays the same when content is copied to another WCC instance. To support migration from older versions of WCC, they left the storage rule named "default" unchanged, created a new storage rule called DispByContentId, and made that one the default for all new content.

    I only stumbled upon this a while back when I was trying to change the FSP configuration so that all content used a webless storage rule. I changed the "default" storage rule, restarted WCC, and checked in a new content item. To my surprise, the new content was not created as webless. I struggled with this for a while until I noticed there were multiple storage rules defined in the FSP configuration. When I looked at the default value for the xStorageRule field in Configuration Manager, sure enough, it was no longer "default" but DispByContentId. Once I updated the DispByContentId storage rule to webless and restarted WCC, all my new content was created using the webless storage rule, just as I wanted.

    I noticed while creating this blog post that the default storage rule is also listed on the File Store Provider Information page, but I guess I didn't see that when I originally ran into this.

    Read the article

  • Multiple servers acting like a single one with all the hardware?

    - by marc.riera
    Hello, right now I have 10 servers for HPC, oriented toward power computing. My users need to launch several processes using qmake. The users are used to working with Ubuntu 9.10, and the software from the repositories is suitable for them. I've deployed Ubuntu 9.10 to all 10 servers (PXE rocks). So far we work with parallel-ssh and cluster-ssh, which allow us to launch the same process on all the servers. With these tools the servers remain independent, but they run the same software and the same launched command. Now we would like to go to the next step and see all the servers as a single one, with the resources of the other 9 as if they were its own. The difference would be substantial in processing time, and also in the time needed to design the command to launch. Any advice on which software to use would be very useful. Thanks
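
    For context, the current parallel-ssh workflow described above might look something like this (the hosts file and the build command are placeholders, not from the original question):

        # run the same build command on every node listed in hosts.txt
        parallel-ssh -h hosts.txt -i 'cd /shared/project && qmake && make -j4'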

    Read the article

  • Recovering/Creating NewWorld Partition on Mac G4 (PPC) after botched Debian Install

    - by Luis Espinal
    I was trying to install Debian 5.04 on a Mac G4 and, in typical geek tradition, I didn't RTFM. During installation I nuked all the existing partitions and created new ones to my liking. But as I learned later in the installation process, yaboot needs a NewWorld boot partition, so I can't boot the installation. I don't have any OS X CDs with me (this is a used G4 I purchased off craigslist) with which to create an HFS partition. I've re-run the Debian installer, which lets me create a partition that is supposed to be of type 'NewWorld', but the installer does not seem to like or recognize it. Any ideas how to proceed from here? Thanks.
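
    One possible route, sketched under the assumption that the target disk is /dev/hda: the Debian installer's shell includes mac-fdisk, whose 'b' command creates the small Apple_Bootstrap (NewWorld) partition that yaboot needs.

        # from the installer shell (Ctrl+Alt+F2), disk device assumed
        mac-fdisk /dev/hda
        # at the mac-fdisk prompt: 'b' creates an Apple_Bootstrap (NewWorld) partition,
        # 'p' prints the partition map, 'w' writes it, 'q' quits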

    Read the article

  • Script for checking for accounts with no logins and then disabling them

    - by suma
    "Could you please share the scripts which does the below ?" I have written a script that scans all the relevent logs daily, makes a list of people that have had any activity that day, and maintains database (just a text file) of users and the last time they logged in. Then I have a second script that examines the database for dates more than x days ago, an notifies the user and administrator 2 weeks prior to locking the account. And if there are any dates more than x+y days ago, deletes the account altogether. This seems to be working for me - but I would like to use a non-proprietary solution if one is available. "Could you please share the scripts?"

    Read the article

  • Why does cifs ask for su rights to write any data to it?

    - by Denys S.
    I'm mounting a Windows share as follows:

        sudo mount -t cifs //192.168.178.49/public -o users,username=name,dom=domain,password=pword /mnt/nas

    Then I'm trying to create a simple file with some basic text:

        touch /mnt/nas/me.txt

    And I get an error; however, the file is created (it contains 0 bytes of data, though):

        touch: cannot touch ‘me.txt’: Permission denied

    With sudo it works flawlessly. How can I allow my current user to write data to the share? Is there a mount option?
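
    A common fix, sketched below: have the kernel map the share's ownership to your own uid/gid at mount time (the 1000/1000 values are assumptions; check yours with id):

        # map file ownership on the mount to the local account (uid/gid assumed)
        sudo mount -t cifs //192.168.178.49/public /mnt/nas \
            -o username=name,dom=domain,password=pword,uid=1000,gid=1000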

    Read the article

  • Removing DS_Store files and variants?

    - by Ron Gejman
    I am running an Ubuntu 10.04.1 LTS server. Frequently I open files on it over AFP from my Mac. Inevitably this creates .DS_Store files on the server (although for some reason they end up named :2eDS_Store; the :2e prefix appears to be netatalk's hex escape for the leading dot). However, it also creates variants of the DS_Store files. These variants are often named similarly to other files in that directory. E.g.:

        ~$ ls
        total 60K
        -rw-r--r-- 1 tarakhovsky  16K 2010-11-30 18:28 :2eDS_Store
        drwx--S--- 4 tarakhovsky 4.0K 2010-11-08 13:58 :2eTemporaryItems/
        lrwxrwxrwx 1 tarakhovsky   15 2010-10-19 17:44 bigdisk -> /media/bigdisk//
        ...
        drwxr-xr-x 3 tarakhovsky 4.0K 2010-11-03 18:24 Temporary Items/
        drwxr-xr-x 3 tarakhovsky 4.0K 2010-11-30 01:34 tmp/
        ...

    I've disabled creation of DS_Store files using:

        defaults write com.apple.desktopservices DSDontWriteNetworkStores true

    so hopefully this won't keep happening, but I really want to get rid of all of the existing variants of DS_Store files already on the server. Any ideas as to why these variants are being created, and how I can get rid of them all?
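
    One way to sweep them up, assuming the variants all keep the :2eDS_Store spelling (the /srv/share path is a placeholder; preview before deleting):

        # preview every escaped DS_Store file under the share
        find /srv/share -name ':2eDS_Store' -print
        # delete them once the list looks right
        find /srv/share -name ':2eDS_Store' -delete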

    Read the article

  • install grub on disk image

    - by Dima
    I have a disk image with 2 partitions:

    Partition 1 has a cramfs file system (read-only). This partition contains all the system files of the OS.
    Partition 2 has an ext3 file system. This partition holds only the configuration files that may change.

    How can I install the GRUB1 boot loader in the MBR? I tried copying the first 446 bytes of my hard disk and copying the GRUB files to the /boot directory on the 1st (cramfs) partition. I cannot use grub-install because I have a disk image and not the disk itself. Any ideas?
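
    One approach, sketched under the assumption that the image is disk.img: loop-mount the image so its partitions appear as block devices, then point the GRUB legacy shell at the loop device. (Note that GRUB legacy has no cramfs driver as far as I know, so its stage files would likely need to live on the ext3 partition instead.)

        # expose the image and its partitions as block devices
        sudo losetup -f --show disk.img        # prints e.g. /dev/loop0
        sudo kpartx -av /dev/loop0             # creates /dev/mapper/loop0p1, loop0p2
        # then, in the grub shell, something like:
        #   device (hd0) /dev/loop0
        #   root (hd0,1)
        #   setup (hd0)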

    Read the article

  • If I exchange the CPUs, must I reinstall the OSes? (swapping CPUs from one *nix-like to another)

    - by dag729
    Hi. As the title suggests, I want to swap CPUs. I have two computers: one with Ubuntu running on an AMD Athlon 64 dual-core 5200+, and the other with FreeBSD running on an AMD Sempron single-core LE-1250. I would like to swap the CPUs between the two machines, that is, take the dual core from the Ubuntu PC and put it in the FreeBSD PC, and vice versa. The motherboard is the same in both. Do you think I will encounter problems?

    Read the article

  • In CentOS 6.2, can I update the kernel to version 3.4? If so, how do I upgrade it?

    - by shiva
    Hi, I have a server running CentOS 6.2 with kernel version 2.6, but I need to increase my application's performance. Kernel 3.4 has the x32 ABI, which can improve performance, so I want to upgrade to 3.4. Is it possible? I tried downloading the kernel, compiling, and installing it, but I still see the same kernel version. What went wrong? I followed the process mentioned in the link below:

        http://www.tecmint.com/kernel-3-5-released-install-compile-in-redhat-centos-and-fedora/
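
    Two things worth checking, sketched below: a freshly compiled kernel only shows up in uname after its bootloader entry exists and has actually been booted.

        # uname reports the *running* kernel; it only changes after a reboot into the new entry
        uname -r
        # verify the new kernel was registered with GRUB (CentOS 6 path)
        grep -E 'title|kernel' /boot/grub/grub.conf
        # make the new entry the default, then reboot into it
        reboot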

    Read the article

  • Installed over 4G RAM on 32-bit OS? [closed]

    - by kai
    Possible Duplicate: 32-bit Windows Server address > 4GB RAM - How?

    I know that on a 32-bit OS the addressable memory space for each process is "4GB" (maybe just 3GB in user space). If I have 8GB of RAM, is it correct that all of the processes can still collectively utilize the 8GB, with each of them limited to a maximum of 4GB? Or can the whole system only see and utilize 4GB of the 8GB, so that having 8GB of RAM on a 32-bit OS is the same as having 4GB?

    Read the article

  • How to kill one session's processes when all the sessions belong to the same user?

    - by Grey
    I opened a VNC server, and my VNC session suddenly died. I have a lot of xterms open in it. When I ssh to the machine and type users, I see a bunch of entries for my own account, like:

        userA userA userA userA userA userA userA

    I know I can use pkill -u username, but since I can only log in as userA, every time I run pkill -u userA it just kills my current session, and the other userA sessions are still there. What can I do?
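
    One hedged approach: rather than matching by user, kill only the processes attached to terminals other than your current one. A sketch (double-check the ps output before piping it to kill):

        # your own terminal, e.g. pts/3
        MYTTY=$(tty | sed 's|^/dev/||')
        # kill userA's processes on every other terminal
        ps -u userA -o pid=,tty= | awk -v me="$MYTTY" '$2 != me {print $1}' | xargs -r kill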

    Read the article

  • TCPDump and IPTables DROP by string

    - by Tiffany Walker
    By using

        tcpdump -nlASX -s 0 -vvv port 80

    I get something like:

        14:58:55.121160 IP (tos 0x0, ttl 64, id 49764, offset 0, flags [DF], proto TCP (6), length 1480)
            206.72.206.58.http > 2.187.196.7.4624: Flags [.], cksum 0x6900 (incorrect -> 0xcd18), seq 1672149449:1672150889, ack 4202197968, win 15340, length 1440
            0x0000:  4500 05c8 c264 4000 4006 0f86 ce48 ce3a  E....d@.@....H.:
            0x0010:  02bb c407 0050 1210 63aa f9c9 fa78 73d0  .....P..c....xs.
            0x0020:  5010 3bec 6900 0000 0f29 95cc fac4 2854  P.;.i....)....(T
            0x0030:  c0e7 3384 e89a 74fa 8d8c a069 f93f fc40  ..3...t....i.?.@
            0x0040:  1561 af61 1cf3 0d9c 3460 aa23 0b54 aac0  .a.a....4`.#.T..
            0x0050:  5090 ced1 b7bf 8857 c476 e1c0 8814 81ed  P......W.v......
            0x0060:  9e85 87e8 d693 b637 bd3a 56ef c5fa 77e8  .......7.:V...w.
            0x0070:  3035 743a 283e 89c7 ced8 c7c1 cff9 6ca3  05t:(>........l.
            0x0080:  5f3f 0162 ebf1 419e c410 7180 7cd0 29e1  _?.b..A...q.|.).
            0x0090:  fec9 c708 0f01 9b2f a96b 20fe b95a 31cf  ......./.k...Z1.
            0x00a0:  8166 3612 bac9 4e8d 7087 4974 0063 1270  .f6...N.p.It.c.p

    What do I pull from this to block via string with iptables? Or is there a better way to block attacks that have something in common? The question is: can I pick any piece of that IP packet and call it a string?

        iptables -A INPUT -m string --algo bm --string attack_string -j DROP

    In other words: in some cases I can ban on TTL=xxx, should an attack have the same TTL. Sure, it will block some legitimate packets, but if it means keeping the box up, it works until the attack goes away. But I would like to LEARN how to FIND other common things in a packet to block on with iptables.
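
    To answer the literal question: yes, the string match takes an arbitrary byte sequence from the payload, either as text or as hex; and TTL can be matched with its own module. A sketch (the patterns below are placeholders taken from the dump above):

        # note the option is --algo (bm or kmp), not --alog
        iptables -A INPUT -p tcp --dport 80 -m string --algo bm --string "attack_string" -j DROP
        # raw bytes from a dump can be matched with --hex-string
        iptables -A INPUT -p tcp --dport 80 -m string --algo bm --hex-string "|5f3f 0162|" -j DROP
        # matching on TTL, per the workaround mentioned above
        iptables -A INPUT -m ttl --ttl-eq 64 -j DROP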

    Read the article

  • GLP for Pillar Axiom 600 Storage System Implementation Specialist

    - by uwes
    Now available at the OPN Competency Center. The guided learning path provides you with an overview of the Pillar Axiom 600 storage system, and the technical details that you need to become a Pillar Axiom 600 Storage System Certified Implementation Specialist. Learn more, go to: Pillar Axiom 600 Storage System Implementation Specialist.

    Read the article

  • Ubuntu won't boot from USB memory stick

    - by mackenir
    I used the instructions on this webpage to create a bootable USB drive for running Ubuntu 9.10. Unfortunately it doesn't work on my EeePC. Even with 'Removable Dev.' selected in the BIOS as the first boot device, the PC just boots into Windows 7. How do I troubleshoot this problem? The drive is readable and looks like this:

        Directory of E:\

        28/10/2009  21:14    <DIR>          .disk
        28/10/2009  21:14               222 README.diskdefines
        28/10/2009  21:14               143 autorun.inf
        28/10/2009  21:14    <DIR>          casper
        28/10/2009  21:14    <DIR>          dists
        28/10/2009  21:14    <DIR>          install
        28/10/2009  21:14    <DIR>          syslinux
        28/10/2009  21:14             4,098 md5sum.txt
        28/10/2009  21:14    <DIR>          pics
        28/10/2009  21:14    <DIR>          pool
        28/10/2009  21:14    <DIR>          preseed
        28/10/2009  21:14                 0 ubuntu
        26/10/2009  16:16         1,468,640 wubi.exe
        25/02/2010  00:28     2,147,483,648 casper-rw
                   8 Dir(s)   5,290,307,584 bytes free
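
    If the BIOS setting is right, the usual suspects are the boot sector and the boot flag; a hedged sketch from a Linux machine, assuming the stick shows up there as /dev/sdb:

        # rewrite the syslinux boot sector on the first partition
        syslinux /dev/sdb1
        # make sure the partition is flagged bootable
        parted /dev/sdb set 1 boot on
        # (from Windows, the equivalent is roughly: syslinux.exe -ma E:)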

    Read the article

  • Improving the performance of JDeveloper11g (part 2) and JVMs in general

    - by asantaga
    Just received an email from one of our JVM developers who read my blog entry on performance tuning JDeveloper11g, and he's confirmed that all of the above parameters are fully supported. He's also provided a description of each parameter, so we can learn what magic is actually being applied:

    - -XX:+AggressiveOpts -- this enables the latest and greatest JVM optimizations. It will likely help most Java applications. It's fully supported. The downside is that because it has the latest and greatest optimizations, there is some small probability that it may not offer as good an experience. As the features enabled by this command line option "mature", they are made the default in a future JDK release. So you can think of this option as the place where the newest optimizations get introduced; some time later they are moved out from under AggressiveOpts to become default behavior.

    - -XX:+OptimizeStringConcat -- only works with the -server JVM. It may be enabled by default in a future JDK 7 update release. This option delays the construction of a StringBuilder/StringBuffer and attempts to avoid re-sizing the underlying char[] by detecting the size of the char[] to allocate based on what's being appended to the StringBuilder/StringBuffer.

    - -XX:+UseStringCache -- I would not suggest using this unless you knew that JDeveloper allocated the same string over and over again, and that string is one of the first 100,000 allocated strings. In short, I'd recommend against using it. In fact, Java 7 (currently) does not include this feature.

    - -XX:+UseCompressedOops -- applicable to 64-bit JVMs. If you're using a 64-bit JVM, I'd suggest you use it. It's auto-enabled in JDK 7 64-bit JVMs, and later JDK 6 64-bit JVMs enable it by default too.

    - -XX:+UseGCOverheadLimit -- this option is already enabled by default.

    One other command line option to consider is -XX:+TieredCompilation, for JDK 6 Update 25 or later, or JDK 7. This gives you the startup of a -client JVM and the peak performance of a -server JVM. Awesome-ness!

    Finally, Charlie also pointed out to me a "new" book he's just published where he goes into the details of JVM tuning, a must for all Fusion Middleware tuning exercises (click the book). Thanks Charlie!
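
    Pulling the supported options together, an illustrative launch line (the application jar is a placeholder, and -XX:+UseStringCache is deliberately omitted per the advice above):

        java -server -XX:+AggressiveOpts -XX:+OptimizeStringConcat \
             -XX:+UseCompressedOops -XX:+TieredCompilation -jar YourApp.jar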

    Read the article

  • Xen domUs crash or become unavailable

    - by Rush
    I have a Xen server with 8 domUs. The server is a Xeon E3-1270 with 16 GB of RAM; I think that is enough for 8 machines. Sometimes a domU crashes and I can't figure out the reason. After a crash I can connect to its console, and there is something like this:

        Oct 8 22:20:49 server kernel: [30892.320780] lowmem_reserve[]: 0 0 0 0
        Oct 8 22:20:49 server kernel: [30892.320790] Node 0 DMA: 10*4kB 3*8kB 13*16kB 10*32kB 7*64kB 3*128kB 2*256kB 2*512kB 1*1024kB 2*2048kB 0*4096kB = 8080kB
        Oct 8 22:20:49 server kernel: [30892.320817] Node 0 DMA32: 648*4kB 2*8kB 1*16kB 0*32kB 1*64kB 0*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 0*4096kB = 5760kB
        Oct 8 22:20:49 server kernel: [30892.320842] 1491 total pagecache pages
        Oct 8 22:20:49 server kernel: [30892.320847] 0 pages in swap cache
        Oct 8 22:20:49 server kernel: [30892.320852] Swap cache stats: add 0, delete 0, find 0/0
        Oct 8 22:20:49 server kernel: [30892.320858] Free swap = 0kB
        Oct 8 22:20:49 server kernel: [30892.320862] Total swap = 0kB
        Oct 8 22:20:49 server kernel: [30892.324024] 524288 pages RAM
        Oct 8 22:20:49 server kernel: [30892.324024] 11010 pages reserved
        Oct 8 22:20:49 server kernel: [30892.324024] 424467 pages shared
        Oct 8 22:20:49 server kernel: [30892.324024] 503538 pages non-shared
        Oct 8 22:20:49 server kernel: [30892.330308] apache2 invoked oom-killer: gfp_mask=0x200da, order=0, oom_adj=0
        Oct 8 22:20:49 server kernel: [30892.330322] apache2 cpuset=/ mems_allowed=0
        Oct 8 22:20:49 server kernel: [30892.330330] Pid: 23938, comm: apache2 Not tainted 2.6.32-5-xen-amd64 #1
        Oct 8 22:20:49 server kernel: [30892.330337] Call Trace:
        Oct 8 22:20:49 server kernel: [30892.330349] [<ffffffff810b7180>] ? oom_kill_process+0x7f/0x23f
        Oct 8 22:20:49 server kernel: [30892.330358] [<ffffffff810b76a4>] ? __out_of_memory+0x12a/0x141
        Oct 8 22:20:49 server kernel: [30892.330367] [<ffffffff810b77fb>] ? out_of_memory+0x140/0x172
        Oct 8 22:20:49 server kernel: [30892.330376] [<ffffffff810bb59c>] ? __alloc_pages_nodemask+0x4e5/0x5f5
        Oct 8 22:20:49 server kernel: [30892.330385] [<ffffffff810cc224>] ? do_wp_page+0x386/0x707
        Oct 8 22:20:49 server kernel: [30892.330395] [<ffffffff8100c3a5>] ? __raw_callee_save_xen_pud_val+0x11/0x1e
        Oct 8 22:20:49 server kernel: [30892.330404] [<ffffffff8100c369>] ? __raw_callee_save_xen_pmd_val+0x11/0x1e
        Oct 8 22:20:49 server kernel: [30892.330412] [<ffffffff810cdfc7>] ? handle_mm_fault+0x7aa/0x80f
        Oct 8 22:20:49 server kernel: [30892.330422] [<ffffffff8130f906>] ? do_page_fault+0x2e0/0x2fc
        Oct 8 22:20:49 server kernel: [30892.330433] [<ffffffff8130d7a5>] ? page_fault+0x25/0x30
        Oct 8 22:20:49 server kernel: [30892.330439] Mem-Info:
        Oct 8 22:20:49 server kernel: [30892.330443] Node 0 DMA per-cpu:
        Oct 8 22:20:49 server kernel: [30892.330450] CPU 0: hi: 0, btch: 1 usd: 0
        Oct 8 22:20:49 server kernel: [30892.330463] CPU 1: hi: 0, btch: 1 usd: 0
        Oct 8 22:20:49 server kernel: [30892.330466] Node 0 DMA32 per-cpu:
        Oct 8 22:20:49 server kernel: [30892.330469] CPU 0: hi: 186, btch: 31 usd: 0
        Oct 8 22:20:49 server kernel: [30892.330472] CPU 1: hi: 186, btch: 31 usd: 60
        Oct 8 22:20:49 server kernel: [30892.330476] active_anon:342076 inactive_anon:115398 isolated_anon:0
        Oct 8 22:20:49 server kernel: [30892.330477] active_file:268 inactive_file:481 isolated_file:0
        Oct 8 22:20:49 server kernel: [30892.330477] unevictable:1125 dirty:2 writeback:13 unstable:0
        Oct 8 22:20:49 server kernel: [30892.330478] free:3410 slab_reclaimable:1718 slab_unreclaimable:6946
        Oct 8 22:20:49 server kernel: [30892.330478] mapped:899 shmem:113 pagetables:35697 bounce:0
        Oct 8 22:20:49 server kernel: [30892.330502] Node 0 DMA free:8036kB min:32kB low:40kB high:48kB active_anon:1144kB inactive_anon:1268kB active_file:8kB inactive_file:8kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:11792kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:224kB kernel_stack:16kB pagetables:1228kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
        Oct 8 22:20:49 server kernel: [30892.330518] lowmem_reserve[]: 0 2004 2004 2004
        Oct 8 22:20:49 server kernel: [30892.330523] Node 0 DMA32 free:5604kB min:5708kB low:7132kB high:8560kB active_anon:1367160kB inactive_anon:460324kB active_file:1064kB inactive_file:1916kB unevictable:4500kB isolated(anon):0kB isolated(file):0kB present:2052320kB mlocked:4500kB dirty:8kB writeback:52kB mapped:3600kB shmem:452kB slab_reclaimable:6872kB slab_unreclaimable:27560kB kernel_stack:3528kB pagetables:141560kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:992 all_unreclaimable? no
        Oct 8 22:20:49 server kernel: [30892.330539] lowmem_reserve[]: 0 0 0 0
        Oct 8 22:20:49 server kernel: [30892.330544] Node 0 DMA: 1*4kB 2*8kB 13*16kB 10*32kB 7*64kB 3*128kB 2*256kB 2*512kB 1*1024kB 2*2048kB 0*4096kB = 8036kB
        Oct 8 22:20:49 server kernel: [30892.330579] Node 0 DMA32: 609*4kB 2*8kB 1*16kB 0*32kB 1*64kB 0*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 0*4096kB = 5604kB
        Oct 8 22:20:49 server kernel: [30892.330605] 1522 total pagecache pages
        Oct 8 22:20:49 server kernel: [30892.330610] 0 pages in swap cache
        Oct 8 22:20:49 server kernel: [30892.330615] Swap cache stats: add 0, delete 0, find 0/0
        Oct 8 22:20:49 server kernel: [30892.330621] Free swap = 0kB
        Oct 8 22:20:49 server kernel: [30892.330625] Total swap = 0kB
        Oct 8 22:20:49 server kernel: [30892.333018] 524288 pages RAM
        Oct 8 22:20:49 server kernel: [30892.333018] 11010 pages reserved
        Oct 8 22:20:49 server kernel: [30892.333018] 424367 pages shared
        Oct 8 22:20:49 server kernel: [30892.333018] 503658 pages non-shared

    It seems like there isn't enough memory for this domU, but there are no memory problems reported in Munin's monitoring: the system uses around 0.2G and 1G is available. So my questions are: Is it a Xen-specific problem that real memory usage differs from the memory usage Munin shows (I've never seen such problems on real hardware)? Or is it just a monitoring problem, in that Munin can't catch the moment when there is an unusually high load and the domU goes down? And how can I defeat this problem? It is really annoying to get e-mail messages saying a domU went down. By the way, the same thing happened when the domU had 2G of memory.
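
    Independent of the monitoring question, the log itself shows Free swap = 0kB; one stopgap sketch is to give the domU a little swap so short spikes page instead of invoking the OOM killer (the 1 GB size is an assumption):

        # inside the domU: create and enable a 1 GB swap file
        dd if=/dev/zero of=/swapfile bs=1M count=1024
        chmod 600 /swapfile
        mkswap /swapfile
        swapon /swapfile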

    Read the article

  • How to repair unbootable Fedora install

    - by Cerin
    How do you repair/reinstall Fedora without deleting any existing partitions or data? I was attempting to upgrade some old Fedora 13 servers to 17, following the instructions in the wiki. After the 14-to-15 upgrade step, rebooting resulted in the output:

        Dropping to debug shell.
        sh: can't access tty; job control turned off
        dracut:/#

    Running dmesg also shows:

        dracut Warning: No root device "block:/dev/mapper/VolGroup-lv_root" found

    Googling shows this error is typically related to some weird RAID issues, but my server is a virtual machine not using any RAID. Using a rescue CD, I can chroot /mnt/sysimage, and all packages and data still seem to be there. How do I make the system bootable again?
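
    A hedged repair sketch from the rescue CD: rebuild the initramfs so dracut can find the root LVM volume again. The kernel version below is a placeholder; use whatever is actually present under /mnt/sysimage/boot, and only reinstall the bootloader if the upgrade moved you to grub2.

        chroot /mnt/sysimage
        # regenerate the initramfs for the installed kernel (version is an example)
        dracut --force /boot/initramfs-3.3.4-5.fc17.x86_64.img 3.3.4-5.fc17.x86_64
        # if needed, reinstall the bootloader (disk device assumed)
        grub2-install /dev/sda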

    Read the article

  • Revenue Recognition: Performance Obligations Pass a Hurdle

    - by Theresa Hickman
    I met up with Seamus Moran, our resident accounting expert, to get his thoughts about the latest happenings with IFRS. Last week, on March 13, the comment period on the FASB and IASB exposure draft "Revenue From Contracts with Customers" closed. FASB and IASB received just over 20 comment letters, a very small number. The implication is that the exposure draft does reflect general acceptance, and therefore will be published as both a US and an international Generally Accepted Accounting Standard. At a recent conference call, FASB and IASB said they expected to complete their report to both Boards on the comments by early summer, complete their deliberation of the comments by the fall, and draft the final standard text by late this year. It is assumed the concept of Performance Obligations will become US GAAP and IFRS in place of the existing standards. They confirmed that all existing US GAAP and IFRS guidelines would be withdrawn, and that they were in dialogue with the SEC on withdrawing the SEC guidelines on the revenue issue as well.

    The open question is: when will Performance Obligations become effective? The Boards have said that they would like this Revenue Recognition standard and the Lease Accounting standard to be effective at the same time, because whatever isn't insurance, interest, or a lease is a revenue arrangement. However, ascertaining what is generally acceptable in respect of leases is proving a little elusive, and the Boards have recently diverged a little on the P&L side of the accounting (although both agree that there will be no off-balance-sheet leases). It is therefore likely that the Lease standard might be delayed. One wonders if the Boards will define the effective date of the Revenue standard independently of the Lease standard, or stick with their resolve to make the two co-effective. The Boards have also said that neither standard will be effective before June 2015.

    Here is the gist of the new Revenue Recognition principle: recognize revenue to depict the transfer of goods or services in an amount that reflects the consideration expected to be entitled in exchange for those goods and services.

    Steps to apply the core principle:
    1. Identify the contract with the customer
    2. Identify the separate performance obligations
    3. Determine the transaction price
    4. Allocate the transaction price
    5. Recognize revenue when a performance obligation is satisfied

    Read the article

  • Proper upstart script for hamachi?

    - by ALQ
    I've been looking for a script to supervise hamachi and mostly got it to work, except for the part that daemonizes hamachid. The following script works but is not perfect. I'm not familiar enough with upstart internals to debug this further.

        description "Hamachi VPN"
        author "Alexis Le-Quoc <[email protected]>"

        start on (net-device-up and local-filesystems and runlevel [2345])
        stop on runlevel [016]

        respawn
        oom never

        env DAEMON=/opt/logmein-hamachi/bin/hamachid

        pre-start script
            [ -x "$DAEMON" ]
        end script

        # should really be:
        # expect daemon
        # exec $DAEMON
        exec $DAEMON debug > /dev/null

    Read the article

  • Thematic map contd.

    - by jsharma
    The previous post (creating a thematic map) described the use of an advanced style (a color ranged-bucket style). The bucket style definition object has an attribute ('classification') which specifies the data classification scheme to use. Its values can be one of {'equal', 'quantile', 'logarithmic', 'custom'}. We used 'logarithmic' in the previous example. Here we'll describe how to use a custom classification algorithm, specifically the Jenks Natural Breaks algorithm. We'll use the JavaScript implementation in geostats.js. The sample code from the previous post needs a few changes, which are listed below.

    Include the geostats.js file before or after including oraclemapsv2.js:

        <script src="geostats.js"></script>

    Modify the bucket style definition to use custom classification:

        bucketStyleDef = {
          numClasses : colorSeries[colorName].classes,
          classification: 'custom',   // was 'logarithmic'
          algorithm: jenksFromGeostats,
          styles: theStyles,
          gradient: useGradient ? 'linear' : 'off'
        };

    The function which implements the custom classification scheme is specified as the algorithm attribute value. It must accept two input parameters, an array of OM.feature and the name of the feature attribute (e.g. TOTPOP) to use in the classification, and it must return an array of buckets (i.e. an array of OM.style.Bucket, or OM.style.RangedBucket in this case). However, the algorithm also needs to know the number of classes (i.e. the number of buckets to create), so we use a global to pass that in. (Note: this bug/oversight will be fixed, and the custom algorithm will then be passed 3 parameters: the features array, the attribute name, and the number of classes.) So createBucketColorStyle() has the following changes:

        var numClasses;
        function createBucketColorStyle(colorName, colorSeries, rangeName, useGradient) {
          var theBucketStyle;
          var bucketStyleDef;
          var theStyles = [];
          //var numClasses;
          numClasses = colorSeries[colorName].classes;
          ...

    and the function jenksFromGeostats is defined as:

        function jenksFromGeostats(featureArray, columnName) {
          var items = []; // array of attribute values to be classified
          $.each(featureArray, function(i, feature) {
            items.push(parseFloat(feature.getAttributeValue(columnName)));
          });
          // create the geostats object
          var theSeries = new geostats(items);
          // call getJenks, which returns an array of class bounds
          var theClasses = theSeries.getJenks(numClasses);
          if (theClasses) {
            theClasses[theClasses.length-1] = parseFloat(theClasses[theClasses.length-1]) + 1;
          } else {
            alert('empty result from getJenks');
          }
          var theBuckets = [], aBucket = null;
          for (var k = 0; k < numClasses; k++) {
            aBucket = new OM.style.RangedBucket({
              low:  parseFloat(theClasses[k]),
              high: parseFloat(theClasses[k+1])
            });
            theBuckets.push(aBucket);
          }
          return theBuckets;
        }

    A screenshot of the resulting map with 5 classes appeared here in the original post. It is also possible to simply create the buckets yourself and supply them when defining the bucket style, instead of specifying the function (algorithm). In that case the bucket style definition object would be:

        bucketStyleDef = {
          numClasses : colorSeries[colorName].classes,
          classification: 'custom',
          buckets: theBuckets, // since we are supplying all the buckets
          styles: theStyles,
          gradient: useGradient ? 'linear' : 'off'
        };

    Read the article
