Search Results

Search found 4043 results on 162 pages for 'mod cluster'.


  • How To Delete Built-in Windows 7 Power Plans (and Why You Probably Shouldn’t)

    - by The Geek
    Do you actually use the Windows 7 power management features? If so, have you ever wanted to just delete one of the built-in power plans? Here’s how you can do so, and why you probably should leave it alone. Just in case you’re new to the party, we’re talking about the power plans that you see when you click on the battery/plug icon in the system tray. The problem is that one of the built-in plans always shows up there, even if you only use custom plans. When you go to “More power options” on the menu there, you’ll be taken to a list of them, but you’ll be unable to get rid of any of the built-in ones, even if you have your own. You can actually delete the power plans, but it will probably cause problems, so we highly recommend against it. If you still want to proceed, keep reading.

    Delete Built-in Power Plans in Windows 7

    Open up an Administrator mode command prompt by right-clicking on the command prompt and choosing “Run as Administrator”, then type in the following command, which will show you the whole list of plans:

        powercfg -list

    Do you see that really long GUID in the middle of each listing? That’s what we’re going to need for the next step. To make it easier, we’ll provide the codes here, just in case you don’t know how to copy to the clipboard from the command prompt:

        Power Scheme GUID: 381b4222-f694-41f0-9685-ff5bb260df2e  (Balanced)
        Power Scheme GUID: 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c  (High performance)
        Power Scheme GUID: a1841308-3541-4fab-bc81-f71556f20b4a  (Power saver)

    Before you do any deleting, you’ll want to export each plan to a file using the -export parameter. For some unknown reason, I used the .xml extension when I did this, though the file isn’t in XML format. Moving on… here’s the syntax of the command:

        powercfg -export balanced.xml 381b4222-f694-41f0-9685-ff5bb260df2e

    This will export the Balanced plan to the file balanced.xml. And now we can delete the plan by using the -delete parameter and the same GUID:

        powercfg -delete 381b4222-f694-41f0-9685-ff5bb260df2e

    If you want to import the plan again, you can use the -import parameter, though it has one quirk: you have to specify the full path to the file, like this:

        powercfg -import c:\balanced.xml

    Using what you’ve learned, you can export each of the plans to a file and then delete the ones you want to delete.

    Why Shouldn’t You Do This?

    Very simple: stuff will break. On my test machine, for example, I removed all of the built-in plans and then imported them all back in, but I’m still getting an error any time I try to access the panel for choosing what the power buttons do. There are a lot more error messages, but I’m not going to waste your time with all of them. So if you want to delete the plans, do so at your own peril. At least you’ve been warned!

    Read the article

  • Unreal Tournament 3 vs UDK: What Should I Choose?

    - by Matt Christian
    Many people in the mod community were very excited to see the release of the Unreal Developer Kit (UDK) a few months ago. Along with generating excitement in a very dedicated community, it also introduced many new modders to a flourishing area of indie development. However, since UDK is free, most beginners jump right into UDK, which is OK, though you might benefit more from purchasing a shelf copy of Unreal Tournament 3.

    UDK

    UDK is a free, full version of UnrealEd (the editor environment used to create games like Gears of War 1/2, Bioshock 1/2, and of course Unreal Tournament 3). It gives you all the features of the editor from the shelf copy of the game, plus refinements to many of the tools. (One of the first things you'll find about UnrealEd is that it's a collection of tools grouped into the same editor, so it really isn't a single 'tool'.) Interestingly enough, Epic is allowing you to sell any game made in UDK, with a few catches. First off, you must purchase a license for your game (which, I THINK, is approximately $99 starting). Secondly, you must pay them 25% of your profits on the first $5,000 of your game revenue (about $1,250). Finally, you cannot use any of the 'media' provided in UDK for your game. UDK provides sample meshes, textures, materials, sounds, and other sample pieces of media pulled (mostly) from Unreal Tournament 3.

    The final point here will really determine whether you should use UDK. There is only a very small amount of media provided in UDK for someone to go in and begin creating levels without first developing their own meshes, textures, and other media. Sure, you can slap together a few unique levels, though you will end up finding yourself restricted to the same items over and over and over. This is absolutely how professional game development is: you are 'given' (typically licensed or built in-house) an engine/editor and you begin creating all the content for the game and placing it. UDK is aimed toward those who really want to build their game content from scratch with a currently existing engine. It is not suited for someone who would like to simply build levels and quick mods without learning external 3D programs and image-editing software.

    Unreal Tournament 3

    Unless you have a serious grudge against FPSs or Epic, or your computer simply can't run it, there really is no reason not to own this game for PC. You can pick it up on Steam or Amazon for around $20 brand new. Not only are you provided with a full single-player and multiplayer game, but you are given the entire UnrealEd 3.0, including all of the content used to build UT3. If you want to start building levels and mods quickly for UT3, you should absolutely pick up a shelf copy. However, as off-the-shelf UT3 is a few years old now, the tools have not been updated for quite a while. Compared to UDK, the menus are more difficult to navigate and take more time getting used to. Since UDK is updated almost every month, there are additions to the editor that may not be in UT3 (including the future addition of 3D!). I haven't worked enough with shelf UT3 to see if there are more features in UDK or if they both offer the same things in different forms; however, you should remember that the Unreal Engine 3.0 has undergone numerous upgrades between its launch and Gears of War 2 (in fact, Epic held a conference to show off what changed just between the Gears of War games).
Since UT3 has much more core content, someone who wants to focus on level editing or modding the core UT3 game may find their needs better suited with an off-the-shelf copy of UT3.  If that level designer has a team that is generating custom assets, they may be better off with UDK. The choice is now yours...

    Read the article

  • Fastest pathfinding for static node matrix

    - by Sean Martin
    I'm programming a route-finding routine in VB.NET for an online game I play, and I'm searching for the fastest pathfinding algorithm for my map type. The game takes place in space, with thousands of solar systems connected by jump gates. The game devs have provided a DB dump containing a list of every system and the systems it can jump to. The map isn't quite a node tree, since some branches can jump to other branches - more of a matrix. What I need is a fast pathfinding algorithm. I have already implemented an A* routine and Dijkstra's; both find the best path but are too slow for my purposes - a search that considers about 5000 nodes takes over 20 seconds to compute. A similar program on a website can do the same search in less than a second. This website claims to use D*, which I have looked into. That algorithm seems more appropriate for dynamic maps rather than one that does not change - unless I misunderstand its premise. So is there something faster I can use for a map that is not your typical tile/polygon-based one? GBFS? Perhaps a DFS? Or have I likely got some problem with my A* - maybe poorly chosen heuristics or movement cost? Currently my movement cost is the length of the jump (the DB dump has solar system coordinates as well), and the heuristic is a quick Euclidean calculation from the node to the goal.

    In case anyone has some optimizations for my A*, here is the routine that consumes about 60% of my processing time, according to my profiler. The coordinateData table contains a list of every system's coordinates, and neighborNode.distance is the distance of the jump.

        Private Function findDistance(ByVal startSystem As Integer, ByVal endSystem As Integer) As Integer
            'hCount += 1
            'If hCount Mod 0 = 0 Then
            'Return hCache
            'End If
            'Initialize variables to be filled
            Dim x1, x2, y1, y2, z1, z2 As Integer
            'LINQ queries for solar system data
            Dim systemFromData = From result In jumpDataDB.coordinateDatas Where result.systemId = startSystem Select result.x, result.y, result.z
            Dim systemToData = From result In jumpDataDB.coordinateDatas Where result.systemId = endSystem Select result.x, result.y, result.z
            'LINQ execute
            'Fill variables with solar system data for from and to system
            For Each solarSystem In systemFromData
                x1 = (solarSystem.x)
                y1 = (solarSystem.y)
                z1 = (solarSystem.z)
            Next
            For Each solarSystem In systemToData
                x2 = (solarSystem.x)
                y2 = (solarSystem.y)
                z2 = (solarSystem.z)
            Next
            Dim x3 = Math.Abs(x1 - x2)
            Dim y3 = Math.Abs(y1 - y2)
            Dim z3 = Math.Abs(z1 - z2)
            'Calculate distance and round
            'Dim distance = Math.Round(Math.Sqrt(Math.Abs((x1 - x2) ^ 2) + Math.Abs((y1 - y2) ^ 2) + Math.Abs((z1 - z2) ^ 2)))
            Dim distance = firstConstant * Math.Min(secondConstant * (x3 + y3 + z3), Math.Max(x3, Math.Max(y3, z3)))
            'Dim distance = Math.Abs(x1 - x2) + Math.Abs(z1 - z2) + Math.Abs(y1 - y2)
            'hCache = distance
            Return distance
        End Function

    And the main loop, the other 30%:

        'Begin search
        While openList.Count() <> 0
            'Set current system and move node to closed
            currentNode = lowestF()
            move(currentNode.id)
            For Each neighborNode In neighborNodes
                If Not onList(neighborNode.toSystem, 0) Then
                    If Not onList(neighborNode.toSystem, 1) Then
                        Dim newNode As New nodeData()
                        newNode.id = neighborNode.toSystem
                        newNode.parent = currentNode.id
                        newNode.g = currentNode.g + neighborNode.distance
                        newNode.h = findDistance(newNode.id, endSystem)
                        newNode.f = newNode.g + newNode.h
                        newNode.security = neighborNode.security
                        openList.Add(newNode)
                        shortOpenList(OLindex) = newNode.id
                        OLindex += 1
                    Else
                        Dim proposedG As Integer = currentNode.g + neighborNode.distance
                        If proposedG < gValue(neighborNode.toSystem) Then
                            changeParent(neighborNode.toSystem, currentNode.id, proposedG)
                        End If
                    End If
                End If
            Next
            'Check to see if done
            If currentNode.id = endSystem Then
                Exit While
            End If
        End While

    If clarification is needed on my spaghetti code, I'll try to explain.
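
    One optimization I've been weighing, sketched below in C# purely as an illustration (the type and member names are hypothetical and the idea is language-neutral), is to load every system's coordinates into an in-memory dictionary once, before the search starts, so the heuristic becomes a plain lookup rather than two LINQ-to-database queries per call.

        using System;
        using System.Collections.Generic;

        // Hypothetical illustration: cache all coordinates up front so the A* heuristic
        // never touches the database while the search is running.
        struct Coord { public int X, Y, Z; }

        class HeuristicCache
        {
            private readonly Dictionary<int, Coord> coords;

            // preloadedCoords would be filled once from the coordinateData table.
            public HeuristicCache(Dictionary<int, Coord> preloadedCoords)
            {
                coords = preloadedCoords;
            }

            // Straight-line (Euclidean) estimate between two systems; no database round trip.
            public double Estimate(int fromSystem, int toSystem)
            {
                Coord a = coords[fromSystem];
                Coord b = coords[toSystem];
                double dx = a.X - b.X, dy = a.Y - b.Y, dz = a.Z - b.Z;
                return Math.Sqrt(dx * dx + dy * dy + dz * dz);
            }
        }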

    Read the article

  • Unable to make the Ralink rt3090 wifi card work properly on my Lenovo B575 with Kubuntu 12.04 64bit

    - by Sebastien
    I look and tried many solution from many thread but I still unable to make this wifi card work properly (very slow, unable to connect to some wifi spot, etc.). I tried to compile the driver from the ralink website but it doesn't work. Tried to blacklist many mod, withou any result. So here are some command results, hope their help you help me: lspci sebastien@sebastien-portable:~$ lspci 00:00.0 Host bridge: Advanced Micro Devices [AMD] Family 14h Processor Root Complex 00:01.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Wrestler [Radeon HD 6310] 00:01.1 Audio device: Advanced Micro Devices [AMD] nee ATI Wrestler HDMI Audio [Radeon HD 6250/6310] 00:11.0 SATA controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 SATA Controller [AHCI mode] 00:12.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller 00:12.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller 00:13.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller 00:13.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller 00:14.0 SMBus: Advanced Micro Devices [AMD] nee ATI SBx00 SMBus Controller (rev 42) 00:14.2 Audio device: Advanced Micro Devices [AMD] nee ATI SBx00 Azalia (Intel HDA) (rev 40) 00:14.3 ISA bridge: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 LPC host controller (rev 40) 00:14.4 PCI bridge: Advanced Micro Devices [AMD] nee ATI SBx00 PCI to PCI Bridge (rev 40) 00:14.5 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI2 Controller 00:15.0 PCI bridge: Advanced Micro Devices [AMD] nee ATI SB700/SB800/SB900 PCI to PCI bridge (PCIE port 0) 00:15.2 PCI bridge: Advanced Micro Devices [AMD] nee ATI SB900 PCI to PCI bridge (PCIE port 2) 00:18.0 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 0 (rev 43) 00:18.1 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 1 00:18.2 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 2 00:18.3 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 3 00:18.4 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 4 00:18.5 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 6 00:18.6 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 5 00:18.7 Host bridge: Advanced Micro Devices [AMD] Family 12h/14h Processor Function 7 02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06) 03:00.0 Network controller: Ralink corp. 
RT3090 Wireless 802.11n 1T/1R PCIe lsmod sebastien@sebastien-portable:~$ lsmod Module Size Used by rt2800pci 18715 0 arc4 12529 2 rt2800lib 58925 1 rt2800pci crc_ccitt 12667 1 rt2800lib rt2x00pci 14577 1 rt2800pci rt2x00lib 55301 3 rt2800pci,rt2800lib,rt2x00pci mac80211 506816 3 rt2800lib,rt2x00pci,rt2x00lib cfg80211 205544 2 rt2x00lib,mac80211 eeprom_93cx6 12725 1 rt2800pci rt2860sta 864748 0 snd_hda_codec_conexant 62128 1 snd_hda_codec_hdmi 32474 1 uvcvideo 72627 0 rts5139 351143 0 snd_hda_intel 33773 4 videodev 98259 1 uvcvideo snd_hda_codec 127706 3 snd_hda_codec_conexant,snd_hda_codec_hdmi,snd_hda_intel snd_hwdep 13668 1 snd_hda_codec psmouse 87692 0 v4l2_compat_ioctl32 17128 1 videodev serio_raw 13211 0 k10temp 13166 0 snd_pcm 97188 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec sp5100_tco 13791 0 i2c_piix4 13301 0 snd_seq_midi 13324 0 snd_rawmidi 30748 1 snd_seq_midi ideapad_laptop 18234 0 sparse_keymap 13890 1 ideapad_laptop rfcomm 47604 0 joydev 17693 0 snd_seq_midi_event 14899 1 snd_seq_midi bnep 18281 2 bluetooth 180104 10 rfcomm,bnep parport_pc 32866 0 ppdev 17113 0 snd_seq 61896 2 snd_seq_midi,snd_seq_midi_event snd_timer 29990 2 snd_pcm,snd_seq snd_seq_device 14540 3 snd_seq_midi,snd_rawmidi,snd_seq snd 78855 18 snd_hda_codec_conexant,snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device soundcore 15091 1 snd mac_hid 13253 0 snd_page_alloc 18529 2 snd_hda_intel,snd_pcm lp 17799 0 parport 46562 3 parport_pc,ppdev,lp usbhid 47199 0 hid 99559 1 usbhid r8169 62099 0 radeon 804372 4 video 19596 0 wmi 19256 0 ttm 76949 1 radeon drm_kms_helper 46978 1 radeon drm 242038 6 radeon,ttm,drm_kms_helper i2c_algo_bit 13423 1 radeon iwconfig sebastien@sebastien-portable:~$ iwconfig lo no wireless extensions. wlan0 IEEE 802.11bgn ESSID:"4CE6763F0E0A" Mode:Managed Frequency:2.452 GHz Access Point: 4C:E6:76:3F:0E:0A Bit Rate=54 Mb/s Tx-Power=20 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=70/70 Signal level=-39 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:100 Missed beacon:0 eth0 no wireless extensions.

    Read the article

  • Managing game state / 'what to update' within an XNA game 'screen'

    - by codinghands
    Note - having read through the other GDev questions suggested when writing this question, I'm confident this isn't a dupe. Of course, it's 3am and I'm likely wrong, so please mod it as such if so. I'm trying to figure out how best to manage state within my game screens - please bear with me though! At the moment I'm using a heavily modified version of the fantastic game state management example on the XNA site available here. This is working perfectly for my 'Screens' - 'IntroScreen' with some shiny logos, 'TitleScreen' and a 'MenuScreen' stacked on top for the title and menu, 'PlayScreen' for the actual gameplay, etc. Each screen has a bunch of sprites and an 'Update' and 'Draw', managed by a 'ScreenManager'. In addition to the above, and as suggested as an answer to my other question here, most screens have a 'GameProcessQueue' class full of 'GameProcess'es which lets me do just about anything (animations, youbetcha!), in any order, in sequence or parallel.

    Why mention all this? When I talk about managing game state I'm thinking more of complex scenarios within a 'Screen'. 'TitleScreen', 'MenuScreen' and the like are all relatively simple. 'PlayScreen' less so. How do people manage the different 'states' within the screen (or whatever you call it) that 'does' gameplay? (For me, the 'PlayScreen'.) I've thought about the following:

    1. Enum of different states in the Screen, an 'activeState' enum-type variable, and switching on the enum in the Screen Update() loop to determine which Screen Update 'sub'-function is called. I can see this getting hairy pretty fast though, as screens get more complex and the 'PlayScreen' becomes a behemoth mega-class.

    2. 'State' class with an Update loop - a Screen can have any number of 'States', 1+ of which are 'active'. The Screen update loop calls update on all active states. States themselves know which screen they belong to, and may even belong to a 'StateManager' which handles transitioning from one state to the next. Once a state is over it's removed from the Screen's state list. The Screen doesn't need a bunch of GameProcessQueues; each State has its own. (There's a rough sketch of this option below.)

    3. Abstract Screen further to be more flexible - I can see the similarities between what I've got (game 'Screens' handled by a ScreenManager) and what I want (states within a screen, and a mechanism to manage them). However, at the moment I see 'Screens' as high level and very distinct ('PlayScreen' with baddies != 'MenuScreen' with 4 words and event handlers), whereas my proposed 'States' are more intrinsically tied to a specific screen with complex requirements. I think.

    This is for a turn-based board game, so it's easier to define things as a discrete series of steps (IntroAnimation - P1Turn - P2Turn - P1Turn ... - GameOver). Obviously with an open-world RPG things are very different, but any advice in this scenario is appreciated. If I'm just going OOP-crazy please say so. Similarly, I'm conscious there's a huge amount on this site re: state management. But as my first 'serious' game after a couple of false starts I'd like to get this right, and would rather be harassed and modded down than never ask :)
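
    To make option 2 a bit more concrete, here's the rough shape I have in mind - just a sketch with placeholder names, not code I've actually written:

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        // Placeholder sketch of the 'State' idea from option 2 above.
        public abstract class PlayState
        {
            public bool IsActive { get; set; }
            public bool IsFinished { get; protected set; }

            // Each state has its own update logic and, if needed, its own GameProcessQueue.
            public abstract void Update(GameTime gameTime);
        }

        public class StateManager
        {
            private readonly List<PlayState> states = new List<PlayState>();

            public void Add(PlayState state)
            {
                state.IsActive = true;
                states.Add(state);
            }

            // Called from PlayScreen.Update: tick every active state, then drop finished ones.
            public void Update(GameTime gameTime)
            {
                foreach (PlayState state in states)
                {
                    if (state.IsActive)
                        state.Update(gameTime);
                }
                states.RemoveAll(s => s.IsFinished);
            }
        }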

    Read the article

  • apt-get upgrade stuck at the same package

    - by decibyte
    Current status I've started to suspect this is not an Ubuntu issue, but related to the internet connection here at my work. Until I'm sure, Im leaving my question below: Original question I'm stuck, can't upgrade my system. Running sudo apt-get upgrade gives me the following: mmm@alalunga:~$ sudo apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done The following packages have been kept back: ginn libgrip0 linux-generic-pae linux-headers-generic-pae linux-image-generic-pae The following packages will be upgraded: apport apport-gtk bind9-host build-essential dhcp3-client dhcp3-common dnsutils eog evince evince-common firefox firefox-branding firefox-dbg firefox-globalmenu firefox-gnome-support firefox-locale-en gimp gimp-data gir1.2-totem-1.0 glib-networking glib-networking-common glib-networking-services gnupg gpgv icedtea-6-jre-cacao icedtea-6-jre-jamvm icedtea-6-plugin icedtea-netx icedtea-netx-common icedtea-plugin isc-dhcp-client isc-dhcp-common libapache2-mod-php5 libart-2.0-2 libbind9-80 libdns81 libevince3-3 libgimp2.0 libisc83 libisccc80 libisccfg82 liblwres80 libssl-dev libssl-doc libssl1.0.0 libtotem0 linux-firmware linux-libc-dev openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib openssl php-pear php5-cli php5-common php5-curl php5-dev php5-gd php5-mysql php5-xsl policykit-1-gnome python-apport python-django python-gst0.10 python-problem-report resolvconf thunderbird thunderbird-globalmenu thunderbird-gnome-support totem totem-common totem-mozilla totem-plugins xserver-xorg-input-synaptics 74 upgraded, 0 newly installed, 0 to remove and 5 not upgraded. Need to get 317 MB/327 MB of archives. After this operation, 1.481 kB of additional disk space will be used. Do you want to continue [Y/n]? Get:1 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] Get:2 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] Get:3 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] Get:4 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] Get:5 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] Get:6 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] Get:7 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] 9% [7 openjdk-6-jre-headless 27,3 MB/27,3 MB 100%] It keeps downloading the package openjdk-6-jre-headless, then does nothing for a while (hanging on what's the last line above), then download the package again. It's at its 13th download attempt at the moment of writing. The actual downloads seem to be done just fine, but whatever it does after downloading seems to be failing. I tried removing openjdk-6, but then it wanted to install openjdk-7 instead, with the same result, hanging at openjdk-7-jre-headless instead. I also tried changing servers from my local (Danish) to the main server. No luck. It's also keeping me from upgrading alle the other packages. What to do? Update After following instructions in the answer by @lpanebr, it is now stuck at the linux-firmware package. So, maybe it's a more general problem than being related to specific package(s)? 
Although it did download some packages without problems before getting stuck at linux-firmware.

    Read the article

  • Grub2: Windows 7 can't boot after installing Ubuntu 10.04 on a different hard drive

    - by dellphi
    I use a dual boot with two hard disks and two OS is Ubuntu 10.04 and Windows 7. Windows 7 installed on the first disk, first partition. Grub is installed on a second hard disk MBR, and Ubuntu installed on an extended partition on a second hard drive. When I select Windows 7 on the Grub menu, the HDD lamp lights up briefly and then black screen on the monitor, with the status of the keyboard is still functioning. Until now (with the default boot from first HDD), I have to press F12 to get into the Grub to run Linux on a second HDD. ================ fdisk -l ================================ dellph1@dellph1-desktop:~$ fdisk -l omitting empty partition (5) Disk /dev/sda: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00087dec Device Boot Start End Blocks Id System /dev/sda1 * 1 23104 185582848+ 7 HPFS/NTFS /dev/sda2 23105 121601 791177122 5 Extended /dev/sda5 36107 74408 307660783+ 7 HPFS/NTFS /dev/sda6 74409 100081 206218341 7 HPFS/NTFS /dev/sda7 100082 121601 172859368+ 7 HPFS/NTFS Disk /dev/sdb: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x6d43dfb2 Device Boot Start End Blocks Id System /dev/sdb1 1 10030 80560066 5 Extended /dev/sdb5 * 1 5560 44657601 83 Linux /dev/sdb6 5560 9387 30736384 83 Linux /dev/sdb7 9387 10030 5164032 82 Linux swap / Solaris dellph1@dellph1-desktop:~$ ================= grub.cfg ================== # DO NOT EDIT THIS FILE # It is automatically generated by /usr/sbin/grub-mkconfig using templates from /etc/grub.d and settings from /etc/default/grub # BEGIN /etc/grub.d/00_header if [ -s $prefix/grubenv ]; then load_env fi set default="0" if [ ${prev_saved_entry} ]; then set saved_entry=${prev_saved_entry} save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z ${boot_once} ]; then saved_entry=${chosen} save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n ${have_grubenv} ]; then if [ -z ${boot_once} ]; then save_env recordfail; fi; fi } insmod ext2 set root='(hd1,5)' search --no-floppy --fs-uuid --set 2f014a3a-35f3-4d05-87aa-34ca677160b7 if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=1024x768 insmod gfxterm insmod vbe if terminal_output gfxterm ; then true ; else # For backward compatibility with versions of terminal.mod that don't # understand terminal_output terminal gfxterm fi fi insmod ext2 set root='(hd1,5)' search --no-floppy --fs-uuid --set 2f014a3a-35f3-4d05-87aa-34ca677160b7 set locale_dir=($root)/boot/grub/locale set lang=en insmod gettext if [ ${recordfail} = 1 ]; then set timeout=-1 else set timeout=5 fi END /etc/grub.d/00_header BEGIN /etc/grub.d/05_debian_theme insmod ext2 set root='(hd1,5)' search --no-floppy --fs-uuid --set 2f014a3a-35f3-4d05-87aa-34ca677160b7 insmod jpeg if background_image /usr/share/backgrounds/CurlsbyCandy.jpg ; then set color_normal=white/black set color_highlight=black/light-gray else set menu_color_normal=white/black set menu_color_highlight=black/light-gray fi END /etc/grub.d/05_debian_theme BEGIN /etc/grub.d/10_linux menuentry 'Ubuntu, with Linux 2.6.32-24-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail 
insmod ext2 set root='(hd1,5)' search --no-floppy --fs-uuid --set 2f014a3a-35f3-4d05-87aa-34ca677160b7 linux /boot/vmlinuz-2.6.32-24-generic root=UUID=2f014a3a-35f3-4d05-87aa-34ca677160b7 ro splash vga=795 quiet splash nomodeset video=uvesafb:mode_option=1280x1024-24,mtrr=3,scroll=ywrap initrd /boot/initrd.img-2.6.32-24-generic } menuentry 'Ubuntu, with Linux 2.6.32-24-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod ext2 set root='(hd1,5)' search --no-floppy --fs-uuid --set 2f014a3a-35f3-4d05-87aa-34ca677160b7 echo 'Loading Linux 2.6.32-24-generic ...' linux /boot/vmlinuz-2.6.32-24-generic root=UUID=2f014a3a-35f3-4d05-87aa-34ca677160b7 ro single splash vga=795 echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-2.6.32-24-generic } END /etc/grub.d/10_linux BEGIN /etc/grub.d/30_os-prober menuentry "Windows 7 (loader) (on /dev/sda1)" { insmod ntfs set root='(hd0,1)' search --no-floppy --fs-uuid --set 5cac2139ac210f58 chainloader +1 } END /etc/grub.d/30_os-prober BEGIN /etc/grub.d/40_multisystem Ajout de MultiSystem MULTISYSTEM MENU menuentry "PLoP Boot Manager" { linux16 /boot/plpbt } menuentry "Smart Boot Manager" { search --set -f /boot/sbootmgr.dsk linux16 /boot/memdisk initrd16 /boot/sbootmgr.dsk } FIN MULTISYSTEM MENU END /etc/grub.d/40_multisystem ================================================ I want to keep the Grub on the second HDD. I have been using the Startup Manager, Boot Manager and Grub Customizer, and this problem still unsolved. The easiest thing that I can possibly do is to install Grub on first HDD, but I was curious and maybe someone can help.

    Read the article

  • SSI: Failed String Comparison with CGI Environment Variable [migrated]

    - by Calyo Delphi
    I am currently working on developing a personal website. It's not my first time doing this, but this is my first major foray into implementing SSI. I've run myself into a wall, however, with an if-else directive that uses one of the CGI environment variables as part of its comparison. Even after some limited attempts at debugging, all of the output and documentation that I have means that the comparisons being made should fail outright. This is not the case, and the wrong evaluation is being made by the if-else directive. Here's the code in the file index.shtml: <head> <!--#set var="page" value="Home" --> <!--#include file="headlinks.shtml" --> <style> img#ref { float: right; margin-left: 8px; border-width: 0px; } </style> </head> Here's the code in the file headlinks.shtml: <title><!--#echo var="page" --> &ndash; <!--#echo var="HTTP_HOST" --></title> <!--#set var="docroot" value="${DOCUMENT_ROOT}" --> <!--#echo var="docroot" --> <!--#if expr="( $docroot != '/Applications/MAMP/htdocs' ) || ( $docroot != '/home/dragarch/public_html' )" --> <link rel="stylesheet" type="text/css" href="../style.css"> <link rel="shortcut icon" type="image/svg+xml" href="../favicon.svg" /> <!--#else --> <link rel="stylesheet" type="text/css" href="style.css"> <link rel="shortcut icon" type="image/svg+xml" href="favicon.svg" /> <!--#endif --> And here's the output for the file index.shtml: <title>Home &ndash; dragarch</title> /Applications/MAMP/htdocs <link rel="stylesheet" type="text/css" href="../style.css"> <link rel="shortcut icon" type="image/svg+xml" href="../favicon.svg" /> Both style.css and favicon.svg are in the document root with index.shtml, so the if directive should fail and default to the output of the else directive. As you can see, while the document root (which is currently the MAMP htdocs folder on my own notebook) is correct according to the output of the echo directive, the comparison in the if-else directive fails to compare the strings properly. I'm using this page for my documentation: http://httpd.apache.org/docs/2.2/mod/mod_include.html I'm at a complete loss as to why this is the case, and need a bit of help here. EDIT: I should note that dragarch is a hostname that I configured in /etc/hosts to point to 127.0.0.1 so I could test the site without having to use localhost. It has no real effect on the functionality of anything, other than to just act as a prettier hostname to use.

    Read the article

  • How can I change the color of the text in my iFrame? [closed]

    - by VinylScratch
    I have code here: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <title>Frag United Banlist</title> </head> <body> <h1>Tekkit Banlist</h1> <?php // change these things $server = "server-host"; $dbuser = "correct-user"; $dbpass = "correct-password"; $dbname = "correct-database"; mysql_connect($server, $dbuser, $dbpass); mysql_select_db($dbname); $result = mysql_query("SELECT * FROM banlist ORDER BY id DESC"); //This will display the most recent by id edit this query how you see fit. Limit, Order, ect. echo "<table width=100% border=1 cellpadding=3 cellspacing=0>"; echo "<tr style=\"font-weight:bold\"> <td>ID</td> <td>User</td> <td>Reason</td> <td>Admin/Mod</td> <td>Time</td> <td>Ban Length</td> </tr>"; while($row = mysql_fetch_assoc($result)){ if($col == "#eeeeee"){ $col = "#ffffff"; }else{ $col = "#eeeeee"; } echo "<tr bgcolor=$col>"; echo "<td>".$row['id']."</td>"; echo "<td>".$row['user']."</td>"; echo "<td>".$row['reason']."</td>"; echo "<td>".$row['admin']."</td>"; //Convert Epoch Time to Standard format $datetime = date("F j, Y, g:i a", $row['time']); echo "<td>$datetime</td>"; $dateconvert = date("F j, Y, g:i a", $row['length']); if($row['length'] == "0"){ echo "<td>None</td>"; }else{ echo "<td>$dateconvert</td>"; } echo "<td>".$row['id']."</td>"; echo "</tr>"; } echo"</table>" ?> </div> </body></html> And I am trying to make it so that when I put it in this iframe: <iframe src="http://bans.fragunited.net/" width="100%" length="100%"><p>Your browser does not support iframes.</p></iframe> But if you go to this page, fragunited.net/bans, (not bans.fragunited.net) the text is black and I want it to be white so you can actually see it. Sorry for the large amount of code, however I don't know where you have to put the code to change the color.

    Read the article

  • Older SAS1 hardware Vs. newer SAS2 hardware

    - by user12620172
    I got a question today from someone asking about the older SAS1 hardware from over a year ago that we had on the older 7x10 series. They didn't leave an email so I couldn't respond directly, but I said this blog would be blunt, frank, and open so I have no problem addressing it publicly. A quick history lesson here: When Sun first put out the 7x10 family hardware, the 7410 and 7310 used a SAS1 backend connection to a JBOD that had SATA drives in it. This JBOD was not manufactured by Sun nor did Sun own the IP for it. Now, when Oracle took over, they had a problem with that, and I really can’t blame them. The decision was made to cut off that JBOD and it’s manufacturer completely and use our own where Oracle controlled both the IP and the manufacturing. So in the summer of 2010, the cut was made, and the 7410 and 7310 had a hardware refresh and now had a SAS2 backend going to a SAS2 JBOD with SAS2 drives instead of SATA. This new hardware had two big advantages. First, there was a nice performance increase, mostly due to the faster backend. Even better, the SAS2 interface on the drives allowed for a MUCH faster failover between cluster heads, as the SATA drives were the bottleneck on the older hardware. In September of 2010 there was a major refresh of the rest of the 7000 hardware, the controllers and the other family members, and that’s where we got today’s current line-up of the 7x20 series. So the 7x20 has always used the new trays, and the 7410 and 7310 have used the new SAS2 trays since last July of 2010. Now for the bad news. People who have the 7410 and 7310 from BEFORE the July 2010 cutoff have the models with SAS1 HBAs in them to connect to the older SAS1 trays. Remember, that manufacturer cut all ties with us and stopped making the JBOD, so there’s just no way to get more of them, as they don’t exist. There are some options, however. Oracle support does support taking out the SAS1 HBAs in the old 7410 and 7310 and put in newer SAS2 HBAs which can talk to the new trays. Hey, I didn’t say it was a great option, I just said it’s an option. I fully realize that you would then have a SAS1 JBOD full of SATA drives that you could no longer connect. I do know a client that did this, and took the SAS1 JBOD and connected it to another server and formatted the drives and is using it as a plain, non-7000 JBOD. This is not supported by Oracle support. The other option is to just keep it as-is, as it works just fine, but you just can’t expand it. Then you can get a newer 7x20 series, and use the built-in ZFSSA replication feature to move the data over. Now you can use the newer one for your production data and use the older one for DR, snaps and clones.

    Read the article

  • Oracle Service Registry 11gR1 Support for Oracle Fusion Middleware/SOA Suite 11g PatchSet 2

    - by Dave Berry
    As you might be aware, a few days back we released Patchset 2 (PS2) for several products in the Oracle Fusion Middleware 11g Release 1 stack including WebLogic Server and SOA Suite. Though there was no patchset released for Oracle Service Registry (OSR) 11g, being an integral part of Fusion Middleware & SOA, OSR 11g R1 ( 11.1.1.2 ) is fully certified with this release. Below is some recommended reading before installing OSR 11g with the new PS2 : OSR 11g R1 & SOA Suite 11g PS2 in a Shared WebLogic Domain If you intend to deploy OSR 11g in the same domain as the SOA Suite 11g, the primary recommendation is to install OSR 11g in its own Managed Server within the same Weblogic Domain as the SOA Suite, as the following diagram depicts : An important pre-requisite for this setup is to apply Patch 9499508, after installation. It basically replaces a registry library - wasp.jar - in the registry application deployed on your server, so as to enable co-deployment of OSR 11g & SOA Suite 11g in the same WLS Domain. The patch fixes a java.lang.LinkageError: loader constraint violation that appears in your OSR system log and is now available for download. The second, equally important, pre-requisite is to modify the setDomainEnv.sh/.cmd file for your WebLogic Domain to conditionally set the CLASSPATH so that the oracle.soa.fabric.jar library is not included in it for the Managed Server(s) hosting OSR 11g. Both these pre-requisites and other OSR 11g Topology Best Practices are covered in detail in the new Knowledge Base article Oracle Service Registry 11g Topology : Best Practices. Architecting an OSR 11g High Availability Setup Typically you would want to create a High Availability (HA) OSR 11g setup, especially on your production system. The following illustrates the recommended topology. The article, Hands-on Guide to Creating an Oracle Service Registry 11g High-Availability Setup on Oracle WebLogic Server 11g on OTN provides step-by-step instructions for creating such an active-active HA setup of multiple OSR 11g nodes with a Load Balancer in an Oracle WebLogic Server cluster environment. Additional Info The OSR Home Page on OTN is the hub for OSR and is regularly updated with latest information, articles, white papers etc. For further reading, this FAQ answers some common questions on OSR. The OSR Certification Matrix lists the Application Servers, Databases, Artifact Storage Tools, Web Browsers, IDEs, etc... that OSR 11g is certified against. If you hit any problems during OSR 11g installation, design time or runtime, the first place to look into is the logs. To find more details about which logs to check when & where, take a look at Where to find Oracle Service Registry Logs? Finally, if you have any questions or problems, there are various ways to reach us - on the SOA Governance forum on OTN, on the Community Forums or by contacting Oracle Support. Yogesh Sontakke and Dave Berry

    Read the article

  • HPC Server Dynamic Job Scheduling: when jobs spawn jobs

    - by JoshReuben
    HPC Job Types

    HPC has 3 types of jobs (http://technet.microsoft.com/en-us/library/cc972750(v=ws.10).aspx):

    - Task Flow: a vanilla sequence
    - Parametric Sweep: concurrently run multiple instances of the same program, each with a different work unit input
    - MPI: message passing between master & slave tasks

    But when you try to go outside the box - job tasks that spawn jobs, blocking the parent task - you run the risk of resource starvation, deadlocks, and recursive, non-converging or exponential blow-up. The solution to this is to write some performance monitoring and job scheduling code. You can do this in 2 ways:

    - Manually control scheduling: allocate/de-allocate resources, change job priorities, pause & resume tasks, restrict long-running tasks to specific compute clusters
    - Semi-automatically: set threshold params for scheduling

    How: Control Job Scheduling

    In order to manage the tasks and resources that are associated with a job, you will need to access the ISchedulerJob interface (http://msdn.microsoft.com/en-us/library/microsoft.hpc.scheduler.ischedulerjob_members(v=vs.85).aspx). This really allows you to control how a job is run - you can access & tweak the following features (a rough sketch of this appears at the end of this post):

    - max / min resource values
    - whether job resources can grow / shrink, and whether jobs can be pre-empted
    - whether the job is exclusive per node
    - the creator process id & the job pool
    - timestamp of job creation & completion
    - job priority, hold time & run time limit
    - re-queue count
    - job progress
    - max / min number of cores, nodes, sockets, RAM
    - dynamic task list: can add / cancel jobs on the fly
    - job counters

    When: Poll Perf Counters

    Tweaking the job scheduler should be done on the basis of resource utilization according to PerfMon counters - HPC exposes 2 Perf objects: Compute Clusters and Compute Nodes (http://technet.microsoft.com/en-us/library/cc720058(v=ws.10).aspx). You can monitor running jobs according to dynamic thresholds - use your own discretion:

    - Percentage processor time
    - Number of running jobs
    - Number of running tasks
    - Total number of processors
    - Number of processors in use
    - Number of processors idle
    - Number of serial tasks
    - Number of parallel tasks

    Design Your Algorithms Correctly

    Finally, don't assume you have unlimited compute resources in your cluster - design your algorithms with the following factors in mind:

    - Branching factor (http://en.wikipedia.org/wiki/Branching_factor): dynamically optimize the number of children per node
    - Cutoffs to prevent explosions (http://en.wikipedia.org/wiki/Limit_of_a_sequence): not all functions converge after n attempts; you also need a threshold of good enough, diminishing returns
    - Heuristic shortcuts (http://en.wikipedia.org/wiki/Heuristic): sometimes an exhaustive search is impractical and shortcuts are suitable
    - Pruning (http://en.wikipedia.org/wiki/Pruning_(algorithm)): remove / de-prioritize unnecessary tree branches
    - Avoid local minima / maxima (http://en.wikipedia.org/wiki/Local_minima): sometimes an algorithm can't converge because it gets stuck in a local saddle - try simulated annealing, hill climbing or genetic algorithms to get out of these ruts
    - Watch out for rounding errors (http://en.wikipedia.org/wiki/Round-off_error): multiple iterations in parallel can quickly amplify & blow up your algo! Use an epsilon; avoid floating point errors, truncations, approximations

    Happy Coding!
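
    As a rough illustration of the manual-control route described above, here's a short C# sketch. The property and method names follow the HPC Pack scheduler API, but treat this as an assumption-laden sketch to verify against your SDK version, not a drop-in implementation.

        using Microsoft.Hpc.Scheduler;
        using Microsoft.Hpc.Scheduler.Properties;

        class JobThrottler
        {
            // Connect to the head node and rein in a long-running job:
            // lower its priority and cap how many cores it may grab.
            static void CapJob(string headNode, int jobId)
            {
                IScheduler scheduler = new Scheduler();
                scheduler.Connect(headNode);

                ISchedulerJob job = scheduler.OpenJob(jobId);
                job.Priority = JobPriority.BelowNormal;   // de-prioritize
                job.MaximumNumberOfCores = 16;            // cap resource usage
                job.CanGrow = false;                      // stop the scheduler from growing it further
                job.Commit();                             // push the changes back to the scheduler
            }
        }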

    Read the article

  • Big Data – Interacting with Hadoop – What is PIG? – What is PIG Latin? – Day 16 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of HIVE in the Big Data story. In this article we will understand what PIG and PIG Latin are in the Big Data story. Yahoo started working on Pig for their application deployment on Hadoop. Yahoo’s goal was to manage their unstructured data.

    What is Pig and What is Pig Latin?

    Pig is a high-level platform for creating MapReduce programs used with Hadoop, and the language we use for this platform is called PIG Latin. Pig was designed to make Hadoop more user-friendly and approachable by power users and non-developers. PIG is an interactive execution environment supporting the Pig Latin language. The Pig Latin language supports loading and processing of input data with a series of transformations to produce the desired results. PIG has two different execution environments: 1) Local Mode – in this case all the scripts run on a single machine; 2) Hadoop – in this case all the scripts run on a Hadoop cluster.

    Pig Latin vs SQL

    Pig essentially creates a set of map and reduce jobs under the hood. Because of this, users do not have to write, compile and build their own solutions for Big Data. Pig is very similar to SQL in many ways. The Pig Latin language provides an abstraction layer over the data. It focuses on the data and not on the structure under the hood. Pig Latin is a very powerful language and it can do various operations like loading and storing data, streaming data, and filtering data, as well as various data operations related to strings. The major difference between SQL and Pig Latin is that PIG is procedural and SQL is declarative. In simpler words, Pig Latin is very similar to a SQL execution plan, and that makes it much easier for programmers to build various processes. Whereas SQL handles trees naturally, Pig Latin follows a directed acyclic graph (DAG). DAGs are used to model several different kinds of structures in mathematics and computer science.

    Tomorrow

    In tomorrow’s blog post we will discuss a very important component of the Big Data ecosystem – Zookeeper.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • ArchBeat Link-o-Rama for 11/15/2011

    - by Bob Rhubart
    Java Magazine - November/December 2011 - by and for the Java Community Java Magazine is an essential source of knowledge about Java technology, the Java programming language, and Java-based applications for people who rely on them in their professional careers, or who aspire to. Enterprise 2.0 Conference: November 14-17 | Kellsey Ruppel "Oracle is proud to be a Gold sponsor of the Enterprise 2.0 West Conference, November 14-17, 2011 in Santa Clara, CA. You will see the latest collaboration tools and technologies, and learn from thought leaders in Enterprise 2.0's comprehensive conference." The Return of Oracle Wikis: Bigger and Better | @oracletechnet The Oracle Wikis are back - this time, with Oracle SSO on top and powered by Atlassian's Confluence technology. These wikis offer quite a bit more functionality than the old platform. Cloud Migration Lifecycle | Tom Laszewski Laszewski breaks down the four steps in the Set Up Phase of the Cloud Migration lifecycle. Architecture all day. Oracle Technology Network Architect Day - Phoenix, AZ - Dec14 Spend the day with your peers learning from Oracle experts in engineered systems, cloud computing, Oracle Coherence, Oracle WebLogic, and more. Registration is free, but seating is limited. SOA all the Time; Architects in AZ; Clearing Info Integration Hurdles This week on the Architect Home Page on OTN. Live Webcast: New Innovations in Oracle Linux Date: Tuesday, November 15, 2011 Time: 9:00 AM PT / Noon ET Speakers: Chris Mason, Elena Zannoni. People in glass futures should throw stones | Nicholas Carr "Remember that Microsoft video on our glassy future? Or that one from Corning? Or that one from Toyota?" asks Carr. "What they all suggest, and assume, is that our rich natural 'interface' with the world will steadily wither away as we become more reliant on software mediation." Integration of SABSA Security Architecture Approaches with TOGAF ADM | Jeevak Kasarkod Jeevak Kasarkod's overview of a new paper from the OpenGroup and the SABSA institute "which delves into the incorporatation of risk management and security architecture approaches into a well established enterprise architecture methodology - TOGAF." Cloud Computing at the Tactical Edge | Grace Lewis - SEI Lewis describes the SEI's work with Cloudlets, " lightweight servers running one or more virtual machines (VMs), [that] allow soldiers in the field to offload resource-consumptive and battery-draining computations from their handheld devices to nearby cloudlets." Simplicity Is Good | James Morle "When designing cluster and storage networking for database platforms, keep the architecture simple and avoid the complexities of multi-tier topologies," says Morle. "Complexity is the enemy of availability." Mainframe as the cloud? Tom Laszewski There's nothing new about using the mainframe in the cloud, says Laszewski. Let Devoxx 2011 begin! | The Aquarium The Aquarium marks the kick-off of Devoxx 2011 with "a quick rundown of the Java EE and GlassFish side of things."

    Read the article

  • Microsoft Technical Computing

    - by Daniel Moth
    In the past I have described the team I belong to here at Microsoft (Parallel Computing Platform) in terms of contributing to Visual Studio and related products, e.g. .NET Framework. To be more precise, our team is part of the Technical Computing group, which is still part of the Developer Division. This was officially announced externally earlier this month in an exec email (from Bob Muglia, the president of STB, to which DevDiv belongs). Here is an extract:

    "… As we build the Technical Computing initiative, we will invest in three core areas:

    1. Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high-performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable ‘just-in-time’ processing. This platform will help ensure processing resources are available whenever they are needed—reliably, consistently and quickly.

    2. Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and trouble shoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today’s modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop… to the cluster… to the cloud.

    3. Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology. …"

    Our Parallel Computing Platform team is directly responsible for item #2, and we work very closely with the teams delivering items #1 and #3. At the same time as the exec email, our marketing team unveiled a website with interviews that I invite you to check out: Modeling the World. Comments about this post welcome at the original blog.
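
    To give a small flavour of what item #2 means in practice, this is the sort of thing the parallel programming models we ship in .NET 4 (e.g. the Task Parallel Library) let you write - a trivial sketch rather than a real workload:

        using System;
        using System.Threading.Tasks;

        class Program
        {
            static void Main()
            {
                double[] results = new double[10000000];

                // The Task Parallel Library spreads the iterations across the available cores.
                Parallel.For(0, results.Length, i =>
                {
                    results[i] = Math.Sqrt(i);
                });

                Console.WriteLine("Done: " + results.Length + " values computed in parallel.");
            }
        }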

    Read the article

  • ArchBeat Top 10 for November 11-17, 2012

    - by Bob Rhubart
    The Top 10 most popular items shared on the OTN ArchBeat Facebook page for the week of November 11-17, 2012.

    - Developing and Enforcing a BYOD Policy: Darin Pendergraft's post includes links to a recent Mobile Access Policy Survey by SANS as well as registration information for a Nov 15 webcast featuring security expert Tony DeLaGrange from Secure Ideas, SANS instructor, attorney and technology law expert Ben Wright, and Oracle IDM product manager Lee Howarth.
    - This Week on the OTN Architect Community Homepage: Make time to check out this week's features on the OTN Solution Architect Homepage, including: SOA Practitioner Guide: Identifying and Discovering Services; a technical article by Yuli Vasiliev on Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster; and the conclusion of the 3-part OTN ArchBeat Podcast on Future-Proofing your career.
    - WLST Starting and Stopping a WebLogic Environment | Rene van Wijk: Oracle ACE Rene van Wijk explores how to start a server with as little input as possible.
    - Cloud Integration White Paper | Bruce Tierney: Bruce Tierney shares an overview of Cloud Integration - A Comprehensive Solution, a new white paper he co-authored with David Baum, Rajesh Raheja, Bruce Tierney, and Vijay Pawar.
    - X.509 Certificate Revocation Checking Using OCSP protocol with Oracle WebLogic Server 12c | Abhijit Patil: Abhijit Patil's article focuses on how to use X.509 Certificate Revocation Checking Functionality with the OCSP protocol to validate in-bound certificates. Although this article focuses on inbound OCSP validation using OCSP, Oracle WebLogic Server 12c also supports outbound OCSP validation.
    - Update on My OBIEE / Exalytics Books | Mark Rittman: Oracle ACE Director Mark Rittman shares several resources related to his books Oracle Business Intelligence 11g Developers Guide and Oracle Exalytics Revealed, including a podcast interview with Oracle's Paul Rodwick.
    - E-Business Suite 12.1.3 Data Masking Certified with Enterprise Manager 12c | Elke Phelps: "You can use the Oracle Data Masking Pack with Oracle Enterprise Manager Grid Control 12c to scramble sensitive data in cloned E-Business Suite environments," reports Elke Phelps. There's a lot more information about this announcement in Elke's post.
    - WebLogic Application Server: free for developers! | Bruno Borges: Java blogger Bruno Borges shares news about important changes in the license agreement for Oracle WebLogic Server.
    - Agile Architecture | David Sprott: "There is ample evidence that Agile Architecture is a primary contributor to business agility, yet we do not have a well understood architecture management system that integrates with Agile methods," observes David Sprott in this extensive post.
    - My iPad & This Cloud Thing | Floyd Teter: Oracle ACE Director Floyd Teter explains why the Cloud is making it possible for him to use his iPad for tasks previously relegated to his laptop, and why this same scenario is likely to play out for a great many people.

    Thought for the Day

    "In programming, the hard part isn't solving problems, but deciding what problems to solve." — Paul Graham

    Source: SoftwareQuotes.com

    Read the article

  • 2 Days to Go before MySQL Connect - Focus on Hands-On Labs

    - by Bertrand Matthelié
    The Oracle MySQL team is very eager to meet all MySQL community members, users, customers and partners gathering this weekend in San Francisco for MySQL Connect! Eight different Hands-On Labs will give you the opportunity to get hands-on experience on the following topics, all taking place in Plaza Room A.

    Saturday:

    11.30 am - Developing Applications with MySQL and Java - Mark Matthews, Oracle
    1.00 pm (2.5 hours long) - Getting Started with MySQL - Gillian Gunson and Alfredo Kojima, Oracle
    4.00 pm - Getting Started with MySQL Cluster - Santo Leto, Oracle
    5.30 pm - Improving Performance with the MySQL Performance Schema - Jesper Krogh, Oracle

    Sunday:

    10.15 am (2.5 hours long) - Focus on MySQL Replication - Sven Sandberg and Luis Soares, Oracle
    1.15 pm - MySQL Utilities - Charles Bell, Oracle
    2.45 pm - Performance Tuning with MySQL Enterprise Monitor - Mark Matthews, Oracle
    4.15 pm - MySQL Security: Authentication and Audit - Jonathon Coombes, Oracle

    Not registered yet? You can still save US$300 off the on-site fee! Attending Oracle OpenWorld or JavaOne? Add MySQL Connect to your registration for only US$100! Register Now!

    Read the article

  • You do not need a separate SQL Server license for a Standby or Passive server - this Microsoft White Paper explains all

    - by tonyrogerson
    If you were in any doubt at all that you need to license Standby / Passive Failover servers then the White Paper “Do Not Pay Too Much for Your Database Licensing” will settle those doubts. I’ve had debate before, people thinking you can only have a single instance as a standby machine; that’s just wrong: it would mean you could have a scenario where you had a 2 node active/passive cluster with database mirroring and log shipping (a total of 4 SQL Server instances) – in that set up you only need to buy one physical license so long as the standby nodes have the same or less physical processors (cores are irrelevant). So next time your supplier suggests you need a license for your standby box tell them you don’t and educate them by pointing them to the white paper. For clarity I’ve copied the extract below from the White Paper.

    Extract from “Do Not Pay Too Much for Your Database Licensing”

    Standby Server

    Customers often implement standby server to make sure the application continues to function in case primary server fails. Standby server continuously receives updates from the primary server and will take over the role of primary server in case of failure in the primary server. Following are comparisons of how each vendor supports standby server licensing.

    SQL Server: Customers does not need to license standby (or passive) server provided that the number of processors in the standby server is equal or less than those in the active server.

    Oracle DB: Oracle requires customer to fully license both active and standby servers even though the standby server is essentially idle most of the time.

    IBM DB2: IBM licensing on standby server is quite complicated and is different for every editions of DB2. For Enterprise Edition, a minimum of 100 PVUs or 25 Authorized User is needed to license standby server.

    The following graph compares prices based on a database application with two processors (dual-core) and 25 users with one standby server.

    [chart snipped]

    Note: All prices are based on newest Intel Xeon Nehalem processor database pricing for purchases within the United States and are in United States dollars. Pricing is based on information available on vendor Web sites for Enterprise Edition.

    Microsoft SQL Server Enterprise Edition: 25 users (CALs) x $164 / CAL + $8,592 / Server = $12,692 (no need to license standby server)

    Oracle Enterprise Edition (base license without options): Named User Plus minimum (25 Named Users Plus per Core) = 25 x 2 = 50 Named Users Plus x $950 / Named User Plus x 2 servers = $95,000

    IBM DB2 Enterprise Edition (base license without feature pack): Need to purchase 125 Authorized Users (400 PVUs / 100 PVUs = 4 x 25 = 100 Authorized Users + 25 Authorized Users for standby server) = 125 Authorized Users x $1,040 / Authorized User = $130,000

    Read the article

  • PASS 13 Dispatches: moving to the cloud

    - by Tony Davis
    PASS Summit 13, Day 1 keynote by Quentin Clarke, and we're hearing about “redefining mission critical in the cloud”. With a move to the Windows Azure cloud comes the promise of capacity on demand, automatic HA, backups, patching and so on, as well as passing responsibility to MS for managing hardware, upgrades and so on. However, for many databases and applications the best route to the cloud is not necessarily obvious. For most, the path of least resistance is IaaS – SQL Server in an Azure VM. It removes the hardware burden, but you still have to manage your databases, and implementing HA for SQL Server is your responsibility. Also, scaling up comes at quite a cost – the biggest VM (8 CPU cores, 56 GB RAM, 16 1 TB drives with 500 IOPS each) weighs in at over $4,500 per month. With PaaS, in the form of Windows Azure SQL Database, you get a “3-copies replica set” so HA comes out of the box, and it removes the majority of the administration burden, but you are moving your database into a very different environment. For a start, it's a shared environment, with other customers using the same compute nodes in the cluster, and potentially even sharing the same database (multi-tenancy). Unless you pay for SQL DB Premium edition, the resources available for your workload will depend on how nicely others “play” in the shared environment. You'll potentially need to do a lot of tuning and application rewriting to avoid throttling issues, optimising application-database communication to deal with the increased latency between the two, and so on. You'll need aggressive application caching. You'll also need retry logic to deal with (expected) node failure and the need to reconnect. In Tuesday's PASS Summit pre-con from the SQLCAT team, they spent a lot of time covering some of the telemetry techniques (collecting the necessary monitoring data into Azure storage) to perform capacity planning and work out the hotspots and bottlenecks in your cloud applications. Tools like WAD (Windows Azure Diagnostics), performance counters, SQL Database DMVs, and others will be essential. Of course, to truly exploit the vast horizontal scaling that is available from the existence of thousands of compute nodes, you'll also need to consider how to “shard” your data so Azure can move it between nodes at will. Finding the right path to the cloud isn't easy, but it's coming. I spoke to people one year ago who saw no real benefit in trying to move their infrastructure and databases to the cloud, but now, at their companies, it's the conversation that won't go away. Tony.
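    The retry logic mentioned above is worth sketching. A minimal example of the pattern follows; the error codes and back-off values are illustrative assumptions rather than an official list, and the exception type is a stand-in for whatever your driver actually raises.

```python
import random
import time

class TransientDbError(Exception):
    """Stand-in for the driver-specific exception; carries the SQL error code."""
    def __init__(self, code, message=""):
        super().__init__(message)
        self.code = code

# Error codes commonly associated with SQL Database throttling/failover;
# treat the exact list as an assumption and verify against current documentation.
TRANSIENT_CODES = {40197, 40501, 40613}

def run_with_retry(operation, max_attempts=5, base_delay=1.0):
    """Run a database operation, reconnecting and retrying on transient failures."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()  # open a fresh connection, execute, return the result
        except TransientDbError as err:
            if err.code not in TRANSIENT_CODES or attempt == max_attempts:
                raise
            # Exponential back-off with jitter before reconnecting to the replica set.
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.5))
```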

    Read the article

  • Partner Webcast – Oracle Coherence Applications on WebLogic 12c Grid - 21st Nov 2013

    - by Thanos Terentes Printzios
    Oracle Coherence is the industry-leading in-memory data grid solution that enables organizations to predictably scale mission-critical applications by providing fast access to frequently used data. As data volumes and customer expectations increase, driven by the “internet of things”, social, mobile, cloud and always-connected devices, so does the need to handle more data in real time, offload over-burdened shared data services and provide availability guarantees. The latest release, Oracle Coherence 12c, comes with great improvements in ease of use, integration and RASP (Reliability, Availability, Scalability, and Performance). In addition, it features an innovative approach to building and deploying a Coherence application as an integral part of a typical JEE enterprise application. Coherence GAR archives and Coherence managed servers are now first-class citizens of JEE applications and Oracle WebLogic domains respectively. That enables even easier development, deployment and management of complex multi-tier enterprise applications powered by rich data grid features. Oracle Coherence 12c makes your solution ready for the future of big data and an always-online world. This webcast focuses on demonstrating:
    How to create a Coherence application using Oracle Enterprise Pack for Eclipse 12.1.2.1.1 (Kepler release).
    How to package the application as a GAR archive inside a deployable EAR application.
    How to deploy the application to multi-tier WebLogic clusters.
    How to define and configure the WebLogic domain for the tiered clusters hosting both the data grid and the client JEE applications.
    Finally, we will expose the data in the grid to external systems using REST services and create a simple web interface to the underlying data using Oracle ADF Faces components. Join us on this technology webcast to find out more about how Oracle Cloud Application Frameworks brings together the key industry-leading technologies of Oracle Coherence and WebLogic 12c, delivering next-generation applications.
    Agenda:
    Introduction to Oracle Coherence
    What's new in the 12c release: POF annotations, Live Events, Elastic Data (flash storage support), Managed Coherence Servers for Oracle WebLogic, Coherence Applications (Grid Archive)
    Live Demonstration: creating and configuring Coherence servers forming the data-tier cluster; creating a simple Coherence grid application in Eclipse; adding REST support and creating a simple ADF Faces client application; deploying the grid and client applications to separate tiers in the WebLogic topology; HA capabilities of the data tier
    Summary - Q&A
    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. REGISTER NOW. For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com
    Stay Connected Oracle Newsletters

    Read the article

  • PHP MVC error handling, view display and user permissions

    - by cen
    I am building a moderation panel from scratch using an MVC approach, and a lot of questions cropped up during development. I would like to hear how others handle these situations. Error handling: should you handle an error inside the class method, or should the method return something anyway and you handle the error in the controller? What about PDO exceptions: how should they be handled? For example, let's say we have a method that returns true if the user exists in a table and false if he does not exist. What do you return in the catch statement? You can't just return false, because then the controller assumes that everything is alright while the truth is that something must be seriously broken. Displaying the error from the method completely breaks the whole design. Maybe a page redirect inside the method? The proper way to show a view: the controller right now looks something like this: include('view/header.php'); if ($_GET['m']=='something') include('view/something.php'); elseif ($_GET['m']=='somethingelse') include('view/somethingelse.php'); include('view/footer.php'); Each view also checks that it was included from the index page, to prevent it being accessed directly. There is a view file for each different document body. Is this way of including different views OK, or is there a more proper way? Managing user rights: each user has his own rights, what he can see and what he can do. Which part of the system should verify that the user has permission to see the view: the controller or the view itself? Right now I do permission checks directly in the view, because each view can contain several forms that require different permissions, and I would need to make a separate file for each of them if the checks were put in the controller. I also have to re-check the permissions every time a form is submitted, because form data can be easily forged. The truth is, all this permission checking and validating of inputs just turns the controller into a huge if/then/else cluster. I feel like 90% of the time I am doing error checks/permissions/validations and very little of the actual logic. Is this normal, even for popular frameworks?
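    One common answer to the error-handling part of the question is to let the data-access method throw (or re-throw) an exception and have the controller translate it into an error view, so a boolean return never has to do double duty. The question is PHP-specific, but the shape of the pattern is language-agnostic; here is a minimal sketch with illustrative names (shown in Python rather than PHP, purely to outline the structure).

```python
class UserRepositoryError(Exception):
    """Raised when the lookup itself fails (e.g. the database is unreachable)."""

class UserRepository:
    def __init__(self, connection):
        self.connection = connection

    def user_exists(self, username):
        # Let infrastructure failures escape as exceptions instead of returning False,
        # so "user not found" and "query failed" stay distinguishable.
        try:
            row = self.connection.query_one(
                "SELECT 1 FROM users WHERE username = ?", (username,))
        except ConnectionError as err:
            raise UserRepositoryError("user lookup failed") from err
        return row is not None

def controller(repo, username, render):
    # The controller decides how a failure is presented (error view, redirect, ...).
    try:
        found = repo.user_exists(username)
    except UserRepositoryError:
        return render("error.php", message="Something went wrong, please retry.")
    return render("user.php", found=found)
```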

    Read the article

  • Database Developer - October 2013 issue: Download Database 12c and related products

    - by Javier Puerta
    The October issue of the Database Application Developer newsletter is now available. The focus of this issue is on downloads of Database 12c and related products. (Full newsletter here) Get Ready to Download, Deploy and Develop for Oracle Database 12c. This month we're focused on downloads. We've rounded up the top developer releases (both early adopter and beta releases) and the articles that will help you do more with Oracle 12c. See the technical content that will help you get started. If you're ready... away we go! — Laura Ramsey, Database and Developer Community, Oracle Technology Network Team
    FEATURED DOWNLOADS
    Download: Oracle Database 12c. According to Tom Kyte, the Oracle 12c version has some of the biggest enhancements to the core database since version 6 - check it out for yourself.
    Download: Oracle SQL Developer 4.0 Early Adopter 2 is Here. Oracle SQL Developer is a free IDE that simplifies the development and management of Oracle Database. It is a complete end-to-end development platform for your PL/SQL applications that features a worksheet for running queries and scripts, a DBA console for managing the database, a reports interface, a complete data modeling solution and a migration platform for moving your third-party databases to Oracle. If you are interested in checking out this new early adopter version, Oracle SQL Developer 4.0 EA is the place to go.
    Download: Oracle 12c Multitenant Self-Provisioning Application -BETA-. The -BETA- is here. The Multitenant Self-Provisioning Application is an easy and productive way for DBAs and developers to get familiar with powerful PDB features, including create, clone, plug and unplug. No better time to start playing with PDBs: Oracle 12c Multitenant Self-Provisioning Application.
    Download: New! Updates to the Oracle Data Integration Portfolio. Oracle GoldenGate 12c and Oracle Data Integrator 12c are now available. From real-time data integration, transactional change data capture, data replication and transformations... to high-volume, high-performance batch loads and event-driven, trickle-feed integration processes... it's now available. Go here for all the details and links to downloads... and congratulations, Data Integration Team!
    Download: Oracle VM Templates for Oracle 12c. Features support for Single Instance, Oracle Restart and Oracle RAC, and support for all current Oracle Database 11.2 versions as well as Oracle 12c on Oracle Linux 5 Update 9 and Oracle Linux 6 Update 4. The Oracle 12c templates allow end-to-end automation for Flex Cluster, Flex ASM and PDBs. See how the Deploycluster tool was updated to support Single Instance and the new Oracle 12c features: Oracle VM Templates for Oracle Database.
    Download: Oracle SQL Developer Data Modeler 4.0 EA 3. If you're looking for a data modeling and database design tool that provides an environment for capturing, modeling, managing and exploiting metadata, it's time to check out Oracle SQL Developer Data Modeler. Oracle SQL Developer Data Modeler 4.0 EA V3 is here.

    Read the article

  • Windows Azure Recipe: Big Data

    - by Clint Edmonson
    As the name implies, what we're talking about here is the explosion of electronic data that comes from huge volumes of transactions, devices, and sensors being captured by businesses today. This data often comes in unstructured formats and/or too fast for us to effectively process in real time. Collectively, we call these the 4 big data V's: Volume, Velocity, Variety, and Variability. These qualities make this type of data best managed by NoSQL systems like Hadoop, rather than by a conventional Relational Database Management System (RDBMS). We know that there are patterns hidden inside this data that might provide competitive insight into market trends. The key is knowing when and how to leverage these “NoSQL” tools combined with traditional business systems such as SQL-based relational databases and warehouses, and other business intelligence tools.
    Drivers: petabyte-scale data collection and storage; business intelligence and insight.
    Solution: The sketch below shows one of many big data solutions using Hadoop's unique highly scalable storage and parallel processing capabilities, combined with Microsoft Office's business intelligence components to access the data in the cluster.
    Ingredients:
    Hadoop – this big data industry heavyweight provides both large-scale data storage infrastructure and a highly parallelized map-reduce processing engine to crunch through the data efficiently. Here are the key pieces of the environment:
    Pig - a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs.
    Mahout - a machine learning library with algorithms for clustering, classification and batch-based collaborative filtering that are implemented on top of Apache Hadoop using the map/reduce paradigm.
    Hive - data warehouse software built on top of Apache Hadoop that facilitates querying and managing large datasets residing in distributed storage. Directly accessible to Microsoft Office and other consumers via add-ins and the Hive ODBC data driver.
    Pegasus - a peta-scale graph mining system that runs in a parallel, distributed manner on top of Hadoop and that provides algorithms for important graph mining tasks such as Degree, PageRank, Random Walk with Restart (RWR), Radius, and Connected Components.
    Sqoop - a tool designed for efficiently transferring bulk data between Apache Hadoop and structured data stores such as relational databases.
    Flume - a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data to HDFS.
    Database – directly accessible to Hadoop via the Sqoop-based Microsoft SQL Server Connector for Apache Hadoop, so data can be efficiently transferred to traditional relational data stores for replication, reporting, or other needs.
    Reporting – provides easily consumable reporting when combined with a database being fed from the Hadoop environment.
    Training: These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above.
    Hadoop Learning Resources (20+ tutorials and labs): a huge collection of resources for learning about all aspects of Apache Hadoop-based development on Windows Azure and the Hadoop and Windows Azure ecosystems.
    SQL Azure (7 labs): Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
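    To make the map-reduce ingredient concrete, here is a minimal word-count mapper/reducer pair in the Hadoop Streaming style (a sketch only; the exact streaming invocation on a Hadoop-on-Azure cluster varies by setup and is not shown).

```python
#!/usr/bin/env python3
"""Word count in the Hadoop Streaming style: run the 'map' stage over raw text,
let Hadoop sort by key between stages, then run the 'reduce' stage over the pairs."""
import sys
from itertools import groupby

def mapper():
    # Emit "word<TAB>1" for every word read from stdin.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word.lower()}\t1")

def reducer():
    # Input arrives grouped by key (Hadoop sorts between stages); sum counts per word.
    pairs = (line.rstrip("\n").split("\t") for line in sys.stdin)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        print(f"{word}\t{sum(int(count) for _, count in group)}")

if __name__ == "__main__":
    # e.g. "wordcount.py map" as the mapper command, "wordcount.py reduce" as the reducer
    mapper() if sys.argv[1:] == ["map"] else reducer()
```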

    Read the article

  • Elastic PaaS with WebLogic and OpenStack, part I

    - by Jernej Kaše
    In my previous blog I described the steps to get OpenStack on Solaris up and running. Now we'll explore how WebLogic and OpenStack can work together to deliver a truly elastic Middleware Platform as a Service.
    Middleware / Platform as a Service goals: First, let's define what PaaS should be. PaaS offerings facilitate the deployment of applications without the complexity of managing the underlying hardware and software and provisioning hosting capabilities. To break it down: PaaS provides a complete platform for hosting solutions (Java EE, SOA, BPM, ...); infrastructure provisioning (virtual machine, OS, platform) and management is hidden from the PaaS user [administrator or developer]; additionally, PaaS could/should define target SLAs, and the platform should ensure the SLAs are met automatically.
    PaaS use case: To make it more tangible, we have an IT administrator who has the requirement to deploy a Java EE enterprise application. The application is used by external users who need to submit reports by the end of each month. As a result, the number of concurrent users will fluctuate, with huge spikes expected around the end of each month. The SLA agreed with management is that no more than 100 requests should be waiting to be processed at any given time. In addition, the IT admin has no more than 3 days to have the platform and the application operational.
    The Challenges: Some of the challenges the IT administrator is facing are: how are we going to ensure the processing power? How are we going to provision the (virtual) machines and the Java EE platform, and deploy the application? How are we going to monitor the SLA? How are we going to react to SLA violations and increase capacity?
    The Ideal Solution: Ideally, the whole process should be automated, "set it and forget it", and require no human interaction: the vendor packages the solution as deployable image(s); the images are deployed to the IaaS; from there, automated processes take care of the SLA.
    Solution Architecture with WebLogic 12c, Dynamic Clusters, OpenStack & Solaris: Oracle Solaris provides the OS and virtualisation through Solaris Zones; OpenStack is part of Solaris 11.2 and provides cloud management (console and API); WebLogic 12c with dynamic clusters provides the platform; Traffic Manager provides load balancing. On top of that, we are going to implement a small control script - Cloud Manager - which is going to monitor the SLA through the WebLogic Diagnostic Framework. In case there are more than 100 pending requests, the script will: provision a new virtual machine based on an image configured for the WebLogic domain; add the machine to the WebLogic domain; increase the number of servers in the dynamic cluster; and start the newly provisioned server.
    Stay tuned for part II: The whole solution with a working demo will be presented in one of our Partner Webcasts in June, exact date TBA. Jernej Kaše is a Fusion Middleware Specialist working closely with Oracle Partners in the ECEMEA region to grow their business by leveraging Oracle technology.
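    The Cloud Manager script described above boils down to a polling loop over one WLDF metric. A minimal sketch of that loop follows; the metric read, the OpenStack provisioning call and the WLST steps are placeholders, and all names here are assumptions rather than a published API.

```python
import time

PENDING_REQUESTS_SLA = 100   # SLA threshold from the use case above
POLL_INTERVAL_SECONDS = 60

def pending_requests():
    """Placeholder: read the pending-request count harvested by the
    WebLogic Diagnostic Framework (for example via a REST or JMX query)."""
    raise NotImplementedError

def scale_out():
    """Placeholder: boot a new instance from the prepared WebLogic image via the
    OpenStack API, add the machine to the domain, grow the dynamic cluster,
    and start the newly provisioned server (for example via WLST)."""
    raise NotImplementedError

def cloud_manager():
    # Poll the SLA metric and scale out whenever the threshold is breached.
    while True:
        if pending_requests() > PENDING_REQUESTS_SLA:
            scale_out()
        time.sleep(POLL_INTERVAL_SECONDS)

if __name__ == "__main__":
    cloud_manager()
```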

    Read the article

< Previous Page | 141 142 143 144 145 146 147 148 149 150 151 152  | Next Page >