Search Results

Search found 32660 results on 1307 pages for 'big number'.


  • Microsoft Generation 4 Datacenter using ITPACs

    - by Eric Nelson
    Microsoft is continuing to make significant investments in datacenter technology, focused on solving issues such as long lead times, significant up-front costs and overcapacity. Enter the world of modular datacenters and ITPACs – IT Pre-Assembled Components. In simple terms: air-handling and IT units which are pre-assembled (looking somewhat like a container) and then installed on concrete bases. Each unit can hold between 400 and 2,500 servers (which means many more virtual machines, depending on your density). Kevin Timmons, manager of the datacenter operations team, just posted a great piece digging into the details, One Small Step for Microsoft’s Cloud, Another Big Step for Sustainability, which includes a short video on how we build one of these ITPACs. You might also want to check out this video from the PDC.

    Read the article

  • Getting a solid understanding of Linux fundamentals

    - by JoshEarl
    I'm delving into the Linux world again as a diversion from my Microsoft-centric day job, and every time I tackle a new project I find it a frustrating exercise in trial and error. One thing I always try to do when learning something new is figure out what the big pieces are and how they work together. I haven't yet come across a resource that explains Linux at this level. Resources seem to be aimed either at the barely-computer-literate crowd ("Linux doesn't bite. Promise!") or at the "just compile the kernel and make your own distro" crowd. I'm looking for a "JavaScript: The Good Parts"-style road map that doesn't necessarily answer all my questions so much as help me understand which questions I need to be asking. Any suggestions?

    Read the article

  • Can I use a genetic algorithm for balancing character builds?

    - by Renan Malke Stigliani
    I'm starting to build an online PVP game (duel-like, one-on-one) where there is leveling, skill points, special attacks and all the common stuff. Since I have never done anything like this, I'm still thinking about the math behind the level/skill/special balance. So I thought a good way of testing the best builds/combos would be to implement a genetic algorithm. It would work like this:
    - Generate a big group of random characters
    - Make them fight, and level them up according to their victories (more XP) / losses (less XP)
    - Mate the winners, crossing their builds, to try and make even better characters
    - Add some more random chars, emulating new players
    - Repeat the process for some time, or until I find some chars who can beat everyone's butt
    I could then play with the math and try to find better balances to make sure that the top x% of chars would be a mix of various build types. (A sketch of this loop follows below.) So, is this a good idea, or is there some other, easier method of doing the balancing?
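
    A minimal sketch of that loop in Python (not from the original post; the combat model, point budget and weights are made-up stand-ins you would swap for the game's real duel rules):

      import random

      BUILD_SIZE = 4        # skill slots, e.g. strength / agility / magic / defense
      POINT_BUDGET = 20     # total skill points per character
      POP_SIZE = 50
      GENERATIONS = 100

      def random_build():
          # Random split of the point budget across the skill slots.
          cuts = sorted(random.randint(0, POINT_BUDGET) for _ in range(BUILD_SIZE - 1))
          return [hi - lo for lo, hi in zip([0] + cuts, cuts + [POINT_BUDGET])]

      def fight(a, b):
          # Stand-in combat model: weighted skill scores plus noise.
          power = lambda c: 1.0 * c[0] + 1.1 * c[1] + 0.9 * c[2] + 1.05 * c[3]
          return a if power(a) + random.gauss(0, 2) > power(b) + random.gauss(0, 2) else b

      def crossover(a, b):
          # Take each skill from a random parent, then rescale back toward the
          # budget (rounding means the total is only approximately POINT_BUDGET).
          child = [random.choice(pair) for pair in zip(a, b)]
          total = sum(child) or 1
          return [round(v * POINT_BUDGET / total) for v in child]

      population = [random_build() for _ in range(POP_SIZE)]
      for _ in range(GENERATIONS):
          wins = [0] * len(population)
          for i in range(len(population)):        # round-robin duels: wins stand in for XP
              for j in range(i + 1, len(population)):
                  winner = fight(population[i], population[j])
                  wins[i if winner is population[i] else j] += 1
          ranked = sorted(range(len(population)), key=lambda k: wins[k], reverse=True)
          parents = [population[k] for k in ranked[:POP_SIZE // 2]]   # winners survive
          children = [crossover(random.choice(parents), random.choice(parents))
                      for _ in range(POP_SIZE // 4)]                  # mate the winners
          fresh = [random_build() for _ in range(POP_SIZE - len(parents) - len(children))]
          population = parents + children + fresh                     # "new players" join

      print("top builds:", population[:5])   # parents are ranked best-first

    If the surviving builds converge on one archetype, that is the balance signal the poster is after: the weights (here, the power function) need retuning until the top ranks stay mixed.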

    Read the article

  • Pac-Man Hiding Spot Makes High Scores a Snap

    - by Jason Fitzpatrick
    This interesting bug (feature?) in the original Pac-Man game makes it easy to hide from the ghosts, ensuring a long-lived and well-fed Pac-Man. Check out the video above to see the black hole you can park Pac-Man in to avoid assault by the ghosts. There are two big caveats with this trick: first, it only works in the original game (spin-offs and modern adaptations won’t necessarily have it, but the original machine and MAME implementations of it will). Second, it doesn’t work if the ghosts see you park yourself there; you need to slip into the spot out of their direct line of sight. Still craving more Pac-Man goodness? Check out these cheat maps that map out all the patterns you need to follow to sneak through every level unmolested by ghosts. [via Neatorama]

    Read the article

  • Oracle Services for Oracle Engineered Systems

    - by Bandari Huang
    ACS (Advanced Customer Support) for Engineered Systems
    - Oracle Solution Support Center
    - Oracle Advanced Monitoring and Resolution
    - ACS for Exadata: Oracle Exadata Start-Up Pack, Exadata Disk Swap Service, Exadata Re-rack Service
    - ACS for Exalogic: Oracle Exalogic Start-Up Pack
    - ACS for SuperCluster: Oracle SPARC SuperCluster Start-Up Pack
    - ACS for Exalytics: Oracle Exalytics Start-Up Pack
    - ACS for BDA (Big Data Appliance)
    - ACS for ODA (Oracle Database Appliance)
    - ACS for ZSA (ZFS Storage Appliance)
    - ACS for ZBA (ZFS Backup Appliance)
    OCS (Oracle Consulting Services) for Engineered Systems
    - Oracle Expert Services for Oracle Engineered Systems
    - Oracle Consulting Virtualization Services
    - OCS for Exadata: Oracle Exadata Architecture Service, Oracle Exadata Architecture Transition Service, Oracle Exadata Implementation Service, Oracle Expert Services for SAP on Oracle Exadata, Oracle Exadata Roadmap Service
    - OCS for Exalogic: Oracle Exalogic Architecture Service, Oracle Exalogic Implementation Service

    Read the article

  • Is SOA really dead?

    - by Ahsan Alam
    I have come across many articles/blogs where authors have strongly hailed the death of Service Oriented Architecture (SOA). I could almost hear the laughter pouring out of their writings. Being a big supporter of SOA, I have found myself wondering – have I been following the wrong path all along? Do I need to change the way I think? Then I started to look around. Many newer technologies and concepts have evolved in the past few years. People are starting to take advantage of cloud computing, SaaS (Software as a Service), multitudes of on-demand platforms and much more. Now I started thinking – is SOA really dead? In order to effectively utilize these newer concepts, I believe we need SOA more than ever, because it gives us loose coupling. People often forget that the key principle behind SOA is loose coupling. We cannot achieve SOA just by throwing in services (WCF, web services); we need loosely coupled systems.

    Read the article

  • KVM switch, screen resolution problem

    - by Vagelism
    I use the latest Ubuntu version. Until today I used it with an Acer TravelMate 4070 and an LG screen in order to extend my desktop, and it worked great. Today I decided to connect the LG screen to a KVM switch in order to share the big screen with another PC when I need it. Through the KVM switch the resolution is lower and I cannot manually change it. I have read several solutions about creating a .conf file, but since I am new to Ubuntu I am afraid to try; moreover, I realized that these articles deal with the same problem but for a main screen, not an extension screen. Any idea how to configure this file correctly (see the sketch below)? I include the links to these articles here: http://robert.penz.name/219/workarou...-kvm-switches/ Where is the X.org config file? How do I configure X there? Thank you!
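
    Not from the original thread, but a sketch of the kind of /etc/X11/xorg.conf stanza those articles describe, assuming the LG screen's native mode is 1920x1080 at 60 Hz. The Modeline values come from running cvt 1920 1080, and the identifiers are made up; KVM switches often block the monitor's EDID data, which is why X falls back to a low resolution and a manually specified mode (here, or via xrandr --newmode) is needed:

      Section "Monitor"
          Identifier "ExternalLG"
          # Modeline generated with: cvt 1920 1080
          Modeline "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync
      EndSection

      Section "Screen"
          Identifier "Screen0"
          Monitor    "ExternalLG"
          SubSection "Display"
              Modes "1920x1080_60.00"
          EndSubSection
      EndSection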

    Read the article

  • Oracle saves with Oracle Database 11g and Advanced Compression

    - by jenny.gelhausen
    Oracle Corporation runs a centralized eBusiness Suite system on Oracle Database 11g for all its employees around the globe. This clustered Global Single Instance (GSI) has scaled seamlessly through many acquisitions over the years, doubling the number of employees since 2001 and supporting around 100,000 employees today, 24 hours a day, 7 days a week, around the world. In this podcast, you'll hear from Raji Mani, IT Director for Oracle's PDIT Group, on how Oracle Database 11g and Advanced Compression are helping to save big on storage costs.

    Read the article

  • Score Awesome Games on the Cheap with Humble Indie Bundle 6

    - by Jason Fitzpatrick
    It’s Humble Indie Bundle time of year again: score six great games at a name-your-own price, including the acclaimed action-RPG Torchlight. The Humble Indie Bundle combines games from independent development houses into a big promotional pack where gamers can name their own price and choose how much of that price goes to the developers or to gaming-related charities. Included in this bundle are: Dustforce, Rochard, Shatter, S.P.A.Z., Torchlight, and Vessel. Torchlight 2, the followup to the wildly popular Torchlight, is set for release in a scant two days – now is the perfect time to pick up a copy of Torchlight on the cheap and get yourself up to speed. The Humble Indie Bundle is cross-platform and DRM-free. Grab a copy and enjoy it on your Windows, Mac, or Linux machine without any registration hassles. The Humble Indie Bundle 6

    Read the article

  • Digital Asset Management System

    - by Prashant
    I am looking for an open-source, web-based digital asset management system. My requirements are to create a web-based system where users can upload and download .zip, .jpg, .png, .pdf, .doc, .xls, etc. media files. There should also be user management, so that we can create multiple users and give them permissions accordingly. I have found http://www.resourcespace.org/ but it looks a bit big and complicated. It fits my needs, but I am looking and researching a bit more to find a good system that is easier to use. If anyone knows of such a web-based system or tool, please share.

    Read the article

  • What can I do with dynamic typing that I cannot do with static typing

    - by Justin984
    I've been using Python for a few days now and I think I understand the difference between dynamic and static typing. What I don't understand is why it's useful. I keep hearing about its "flexibility", but it seems like it just moves a bunch of compile-time checks to runtime, which means more unit tests. This seems like an awfully big tradeoff to make for small advantages like readability and "flexibility". Can someone provide me with a real-world example where dynamic typing allows me to do something I can't do with static typing?
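
    Not an answer from the thread, but one hedged illustration of the "flexibility" being asked about, in Python: a function can accept any object that supports the operations it actually uses (duck typing), with the checks deferred to runtime:

      def describe(collection):
          # Works on any object with len() and iteration -- list, str, dict, custom types.
          return f"{type(collection).__name__} with {len(collection)} items: {list(collection)[:3]}"

      class Deck:
          def __init__(self):
              self.cards = list(range(52))
          def __len__(self):
              return len(self.cards)
          def __iter__(self):
              return iter(self.cards)

      print(describe([1, 2, 3]))   # list with 3 items: [1, 2, 3]
      print(describe("abc"))       # str with 3 items: ['a', 'b', 'c']
      print(describe(Deck()))      # Deck with 52 items: [0, 1, 2]
      # The cost is exactly the tradeoff the poster names: describe(42) fails
      # only at runtime -- the check a static language would do at compile time.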

    Read the article

  • Manage SQL Server Connectivity through Windows Azure Virtual Machines Remote PowerShell

    - by SQLOS Team
    This blog post comes from Khalid Mouss, Senior Program Manager in Microsoft SQL Server.
    Overview
    The goal of this blog is to demonstrate how we can automate, through PowerShell, connecting multiple SQL Server deployments in Windows Azure Virtual Machines. We configure a TCP port that we open (and close) through the Windows firewall from a remote PowerShell session to the Virtual Machine (VM). This demonstrates how to take advantage of the remote PowerShell support in Windows Azure Virtual Machines to automate the steps required to connect SQL Server in the same cloud service and in different cloud services.
    Scenario 1: VMs connected through the same Cloud Service
    Two virtual machines are configured in the same cloud service, each running a different SQL Server instance. Both VMs are configured with remote PowerShell turned on, so that PS and other commands can be run in them remotely in order to re-configure them to allow incoming SQL connections from a remote VM or on-premises machine(s). Note: RDP (Remote Desktop Protocol) is kept configured in both VMs by default to be able to remote-connect to them and check the connections to the SQL instances, for demo purposes only; it is not actually required.
    Step 1 – Provision VMs and Configure Ports
    Provision VM1 (DemoVM1), then provision VM2 (DemoVM2) with PowerShell remoting enabled and connected to DemoVM1 (see the example screenshots in the original post if using the portal). After provisioning the 2 VMs, note the default port configuration.
    Step 2 – Verify / Confirm the TCP Port Used by the Database Engine
    By default, the port will be 1433; this can be changed to a different port number if desired.
    1. RDP to each of the VMs created above – this will also ensure the VMs complete SysPrep(ing) and finish configuration
    2. Go to SQL Server Configuration Manager -> SQL Server Network Configuration -> Protocols for <SQL instance> -> TCP/IP -> IP Addresses
    3. Confirm the port number used by the SQL Server engine; in this case 1433
    4. Update from Windows Authentication to Mixed mode
    5. Restart the SQL Server service for the change to take effect
    6. Repeat steps 3, 4, and 5 for the second VM: DemoVM2
    Step 3 – Remote PowerShell to DemoVM1
    Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <username> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)
    You will then be prompted to enter the password.
    Step 4 – Open port 1433 in the Windows firewall
    netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow
    Output: Ok.
    netsh advfirewall firewall show rule name=DemoVM1Port
    Rule Name:      DemoVM1Port
    ----------------------------------------------------------------------
    Enabled:        Yes
    Direction:      In
    Profiles:       Domain,Private,Public
    Grouping:
    LocalIP:        Any
    RemoteIP:       Any
    Protocol:       TCP
    LocalPort:      1433
    RemotePort:     Any
    Edge traversal: No
    Action:         Allow
    Ok.
    Step 5 – Now connect from DemoVM2 to the DB instance in DemoVM1
    Step 6 – Close port 1433 in the Windows firewall
    netsh advfirewall firewall delete rule name=DemoVM1Port
    Output: Deleted 1 rule(s). Ok.
    netsh advfirewall firewall show rule name=DemoVM1Port
    No rules match the specified criteria.
    Step 7 – Try to connect from DemoVM2 to the DB instance in DemoVM1
    Because port 1433 has been closed (in step 6) in the Windows firewall on the VM1 machine, we can no longer connect from VM2 remotely to VM1.
    Scenario 2: VMs provisioned in different Cloud Services
    Two virtual machines are configured in different cloud services, each running a different SQL Server instance. Both VMs are configured with remote PowerShell turned on, so that PS and other commands can be run in them remotely in order to re-configure them to allow incoming SQL connections from a remote VM or on-premises machine(s). Note: RDP is kept configured in both VMs by default for demo purposes only; it is not actually needed.
    Step 1 – Provision new VM3
    Provision VM3 (DemoVM3) (see the example screenshots in the original post if using the portal). After provisioning is complete, note the default port configuration.
    Step 2 – Add a public port to VM1 to connect to from VM3's DB instance
    Since VM3 and VM1 are not connected in the same cloud service, we will need to specify the full DNS address, which includes the public port, while connecting between the machines. We add a public port, 57000 in this case, linked to private port 1433, which will be used later to connect to the DB instance.
    Step 3 – Remote PowerShell to DemoVM1
    Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <UserName> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)
    You will then be prompted to enter the password.
    Step 4 – Open port 1433 in the Windows firewall
    netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow
    Output: Ok.
    netsh advfirewall firewall show rule name=DemoVM1Port
    (the same rule listing as in Scenario 1, Step 4)
    Step 5 – Now connect from DemoVM3 to the DB instance in DemoVM1
    RDP into VM3, launch SSMS, and connect to VM1's DB instance. You must specify the full server name using the DNS address and the public port number configured above.
    Step 6 – Close port 1433 in the Windows firewall
    netsh advfirewall firewall delete rule name=DemoVM1Port
    Output: Deleted 1 rule(s). Ok.
    netsh advfirewall firewall show rule name=DemoVM1Port
    No rules match the specified criteria.
    Step 7 – Try to connect from DemoVM3 to the DB instance in DemoVM1
    Because port 1433 has been closed (in step 6) in the Windows firewall on the VM1 machine, we can no longer connect from VM3 remotely to VM1.
    Conclusion
    Through the new support for remote PowerShell in Windows Azure Virtual Machines, one can script and automate many Virtual Machine and SQL Server management tasks. In this blog, we have demonstrated how to start a remote PowerShell session and re-configure the Virtual Machine firewall to allow (or disallow) SQL Server connections.
    References
    SQL Server in Windows Azure Virtual Machines
    Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • Gnome extensions stay in the list after being removed

    - by SingerOfTheFall
    I've got a little issue with GNOME Shell extensions. After installing some of them, I realized I didn't like them and decided to remove them. The extensions themselves (their folders in /home/username/.local/share/gnome-shell/extensions) were deleted successfully. However, the deleted extensions were not removed from the list of installed extensions at extensions.gnome.org, and they also were not removed from the list in gnome-tweak-tool. So now my list contains a bunch of extensions that I have already deleted. The funny thing is that I can't reinstall them either, since both gnome-tweak-tool and the website think they are still there. This isn't a big deal of course, but I find it a little annoying. Reinstalling gnome-tweak-tool didn't help. Is there a way to somehow update the status of installed extensions?

    Read the article

  • General input/output error on OpenOffice 3.2

    - by Scott
    I have been working on a research paper in the OpenOffice 3.2 word processor, and I also had a slide show in OpenOffice 3.2 Presentation. I saved both frequently and they were not very big (no more than 5 pages of text, or a few slides). A few days after I last edited and saved them, I tried to open the slide show but received a window saying: General Error. General input/output error. When I tried opening the paper, it asked for ASCII filter options (font, character, language, and paragraph break). When I set the normal settings, a single blank document opened with nothing on it. I looked at both files' properties: the size was 0 or 1.5 KB, and the volume was unknown. I recently installed some updates for Ubuntu 10.10, and everything else on my computer is working normally, including other documents and slide shows. Is this a fixable crash?

    Read the article

  • Beat detection, weird detection

    - by Quincy
    I made this sound-analyzer class to detect beats in songs (posted on pastebin because of its size: pastebin.com/8PdgZPP3 – I'll inline it here if people would rather have that). But for some reason it's only detecting beats from 637 sec to around 641 sec, and I have no idea why. I know the beats are being inserted from multiple bands, since I am finding duplicates, and it seems as if it's assigning a beat to each instant-energy value in between those values. It's modeled after this: http://www.flipcode.com/misc/BeatDetectionAlgorithms.pdf So why won't the beats register properly? (A sketch of the core algorithm is below.)
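
    For readers who skip the pastebin, a minimal Python sketch of the flipcode-style algorithm the class is modeled on (constants are illustrative): flag a beat when the instant energy of the latest block spikes above the average energy of roughly the last second. One common cause of beats registering only in one narrow time window is an energy-history buffer that never rolls correctly, so that part is worth checking first.

      import numpy as np

      def detect_beats(samples, rate=44100, block=1024, sensitivity=1.3):
          """Energy-based beat detection (after the flipcode paper).

          samples: 1-D float array of mono PCM data. Returns the times (in
          seconds) where instant energy exceeds `sensitivity` times the
          average energy of the preceding ~1 second.
          """
          history_len = rate // block      # ~1 second of energy history
          history, beats = [], []
          for i in range(0, len(samples) - block, block):
              instant = float(np.sum(samples[i:i + block] ** 2))
              if len(history) == history_len:
                  avg = sum(history) / history_len
                  if avg > 0 and instant > sensitivity * avg:
                      beats.append(i / rate)
                  history.pop(0)           # roll the window -- a frequent bug spot
              history.append(instant)
          return beats

      # Example: a 4 Hz click track over noise should yield beats ~0.25 s apart
      # (after the first second, once the history window has filled).
      rate = 44100
      t = np.arange(rate * 2) / rate
      signal = 0.01 * np.random.randn(len(t))
      signal[(np.arange(len(t)) % (rate // 4)) < 64] += 0.8   # short clicks
      print(detect_beats(signal, rate)[:8])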

    Read the article

  • SQL SERVER – Video – Performance Improvement in Columnstore Index

    - by pinaldave
    I earlier wrote an article, SQL SERVER – Fundamentals of Columnstore Index, and it was very well received by the community. However, one suggestion I keep receiving on that article is that many readers wanted to see the columnstore index in action but were not able to. Some readers had not installed SQL Server 2012, and some did not have a machine powerful enough to recreate the big table involved in the demo. For that reason, I have created a small video. I have also written two more articles on columnstore indexes; please read them as follow-ups to the video: SQL SERVER – How to Ignore Columnstore Index Usage in Query, and SQL SERVER – Updating Data in A Columnstore Index. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • Organizations & Architecture UNISA Studies – Chap 7

    - by MarkPearl
    Learning Outcomes
    - Name different device categories
    - Discuss the functions and structure of I/O modules
    - Describe the principles of programmed I/O
    - Describe the principles of interrupt-driven I/O
    - Describe the principles of DMA
    - Discuss the evolution characteristics of I/O channels
    - Describe different types of I/O interface
    - Explain the principles of point-to-point and multipoint configurations
    - Discuss the way in which a FireWire serial bus functions
    - Discuss the principles of InfiniBand architecture
    External Devices
    An external device attaches to the computer by a link to an I/O module. The link is used to exchange control, status, and data between the I/O module and the external device. External devices can be classified into 3 categories:
    - Human readable – e.g. video display
    - Machine readable – e.g. magnetic disk
    - Communications – e.g. wifi card
    I/O Modules
    An I/O module has two major functions:
    - Interface to the processor and memory via the system bus or central switch
    - Interface to one or more peripheral devices by tailored data links
    Module Functions
    The major functions or requirements for an I/O module fall into the following categories:
    - Control and timing
    - Processor communication
    - Device communication
    - Data buffering
    - Error detection
    The I/O function includes a control and timing requirement, to coordinate the flow of traffic between internal resources and external devices. Processor communication involves command decoding, data, status reporting, and address recognition. The I/O module must also be able to perform device communication; this communication involves commands, status information, and data. An essential task of an I/O module is data buffering, due to the relatively slow speeds of most external devices. Finally, an I/O module is often responsible for error detection and for subsequently reporting errors to the processor.
    I/O Module Structure
    An I/O module functions to allow the processor to view a wide range of devices in a simple-minded way. The I/O module may hide the details of timing, formats, and the electromechanics of an external device so that the processor can function in terms of simple read and write commands. An I/O channel/processor is an I/O module that takes on most of the detailed processing burden, presenting a high-level interface to the processor. Three techniques are possible for I/O operations:
    - Programmed I/O
    - Interrupt-driven I/O
    - Direct memory access (DMA)
    Programmed I/O
    When a processor is executing a program and encounters an instruction relating to I/O, it executes that instruction by issuing a command to the appropriate I/O module. With programmed I/O, the I/O module performs the requested action and then sets the appropriate bits in the I/O status register; it takes no further action to alert the processor.
    I/O Commands
    To execute an I/O-related instruction, the processor issues an address, specifying the particular I/O module and external device, and an I/O command.
    There are four types of I/O commands that an I/O module may receive when it is addressed by a processor:
    - Control – used to activate a peripheral and tell it what to do
    - Test – used to test various status conditions associated with an I/O module and its peripherals
    - Read – causes the I/O module to obtain an item of data from the peripheral and place it in an internal buffer
    - Write – causes the I/O module to take an item of data from the data bus and subsequently transmit that data item to the peripheral
    The main disadvantage of this technique is that it is a time-consuming process that keeps the processor busy needlessly.
    I/O Instructions
    With programmed I/O there is a close correspondence between the I/O-related instructions that the processor fetches from memory and the I/O commands that the processor issues to an I/O module to execute those instructions. Typically there will be many I/O devices connected through I/O modules to the system; each device is given a unique identifier or address. When the processor issues an I/O command, the command contains the address of the desired device, so each I/O module must interpret the address lines to determine if the command is for itself. When the processor, main memory, and I/O share a common bus, two modes of addressing are possible:
    - Memory-mapped I/O
    - Isolated I/O
    (for a detailed explanation read page 245 of the book)
    The advantage of memory-mapped I/O over isolated I/O is that it has a large repertoire of instructions that can be used, allowing more efficient programming. The disadvantage is that valuable memory address space is used up.
    Interrupt-driven I/O
    Interrupt-driven I/O works as follows:
    - The processor issues an I/O command to a module and then goes on to do some other useful work
    - The I/O module then interrupts the processor to request service when it is ready to exchange data with the processor
    - The processor executes the data transfer and then resumes its former processing
    (A toy simulation contrasting the two techniques follows below.)
    Interrupt Processing
    The occurrence of an interrupt triggers a number of events, both in the processor hardware and in software. When an I/O device completes an I/O operation, the following sequence of hardware events occurs:
    1. The device issues an interrupt signal to the processor
    2. The processor finishes execution of the current instruction before responding to the interrupt
    3. The processor tests for an interrupt, determines that there is one, and sends an acknowledgement signal to the device that issued the interrupt; the acknowledgement allows the device to remove its interrupt signal
    4. The processor now prepares to transfer control to the interrupt routine. To begin, it saves the information needed to resume the current program at the point of interrupt. The minimum information required is the status of the processor and the location of the next instruction to be executed
    5. The processor loads the program counter with the entry location of the interrupt-handling program that will respond to this interrupt. It also saves the values of the processor registers, because the interrupt operation may modify these
    6. The interrupt handler processes the interrupt; this includes examination of status information relating to the I/O operation or other event that caused the interrupt
    7. When interrupt processing is complete, the saved register values are retrieved from the stack and restored to the registers
    8. Finally, the PSW and program counter values from the stack are restored
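    To make the contrast concrete, a toy Python simulation (not from the textbook; device timing and names are invented): programmed I/O burns processor cycles polling the status register, while interrupt-driven I/O lets the processor do other work until the device signals completion.

      import threading, time, queue

      class Device:
          """Toy peripheral that takes a while to produce one word of data."""
          def __init__(self):
              self.status_ready = False
              self.data = None
          def start_read(self, on_interrupt=None):
              def work():
                  time.sleep(0.1)                   # slow external device
                  self.data, self.status_ready = 0x42, True
                  if on_interrupt:
                      on_interrupt(self)            # interrupt-driven: signal the processor
              threading.Thread(target=work).start()

      # Programmed I/O: the processor loops on the status register, doing nothing else.
      dev = Device()
      dev.start_read()
      polls = 0
      while not dev.status_ready:                   # busy-wait = wasted cycles
          polls += 1
      print(f"programmed I/O: got {dev.data:#x} after {polls} polls")

      # Interrupt-driven I/O: the handler queues the result while the
      # processor does useful work in the meantime.
      done = queue.Queue()
      dev2 = Device()
      dev2.start_read(on_interrupt=lambda d: done.put(d.data))
      useful = 0
      while done.empty():
          useful += 1                               # stands in for other instructions
      print(f"interrupt-driven: got {done.get():#x} after {useful} units of other work")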
    Design Issues
    Two design issues arise in implementing interrupt I/O:
    - Because there will be multiple I/O modules, how does the processor determine which device issued the interrupt?
    - If multiple interrupts have occurred, how does the processor decide which one to process?
    Addressing device recognition, 4 general categories of techniques are in common use:
    - Multiple interrupt lines
    - Software poll
    - Daisy chain
    - Bus arbitration
    For a detailed explanation of these approaches read page 250 of the textbook.
    Interrupt-driven I/O, while more efficient than simple programmed I/O, still requires the active intervention of the processor to transfer data between memory and an I/O module, and any data transfer must traverse a path through the processor. Thus it suffers from two inherent drawbacks:
    - The I/O transfer rate is limited by the speed with which the processor can test and service a device
    - The processor is tied up in managing an I/O transfer; a number of instructions must be executed for each I/O transfer
    Direct Memory Access
    When large volumes of data are to be moved, an efficient technique is direct memory access (DMA).
    DMA Function
    DMA involves an additional module on the system bus. The DMA module is capable of mimicking the processor and taking over control of the system from the processor; it needs to do this to transfer data to and from memory over the system bus. DMA must use the bus only when the processor does not need it, or it must force the processor to suspend operation temporarily (the most common approach, referred to as cycle stealing). When the processor wishes to read or write a block of data, it issues a command to the DMA module, sending it the following information (see the sketch below):
    - Whether a read or write is requested, using the read or write control line between the processor and the DMA module
    - The address of the I/O device involved, communicated on the data lines
    - The starting location in memory to read from or write to, communicated on the data lines and stored by the DMA module in its address register
    - The number of words to be read or written, communicated via the data lines and stored in the data count register
    The processor then continues with other work; it delegates the I/O operation to the DMA module, which transfers the entire block of data, one word at a time, directly to or from memory without going through the processor. When the transfer is complete, the DMA module sends an interrupt signal to the processor. Thus the processor is involved only at the beginning and end of the transfer.
    I/O Channels and Processors
    Characteristics of I/O Channels
    As one proceeds along the evolutionary path, more and more of the I/O function is performed without CPU involvement. The I/O channel represents an extension of the DMA concept. An I/O channel has the ability to execute I/O instructions, which gives it complete control over I/O operations. In a computer system with such devices, the CPU does not execute I/O instructions; such instructions are stored in main memory to be executed by a special-purpose processor in the I/O channel itself. Two types of I/O channels are common:
    - A selector channel controls multiple high-speed devices.
    - A multiplexor channel can handle I/O with multiple characters as fast as possible to multiple devices.
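    And a matching toy sketch of the DMA handoff just described (again with invented names, not textbook code): the processor issues a single command carrying the four fields above, continues with other work, and is interrupted only once the whole block has moved.

      import threading

      class DMAModule:
          """Toy DMA controller: moves a block between 'memory' and a device buffer."""
          def transfer(self, read, address, start, count, memory, device_buf, on_done):
              def work():
                  for i in range(count):            # one word at a time, no CPU involved
                      if read:
                          memory[start + i] = device_buf[address + i]
                      else:
                          device_buf[address + i] = memory[start + i]
                  on_done()                         # interrupt at the end of the block
              threading.Thread(target=work).start()

      memory = [0] * 16
      device = list(range(100, 116))
      finished = threading.Event()

      dma = DMAModule()
      # The processor issues one command: read 8 words from device offset 0 into memory[4:].
      dma.transfer(read=True, address=0, start=4, count=8,
                   memory=memory, device_buf=device, on_done=finished.set)

      # ... the processor executes unrelated instructions here ...
      finished.wait()                               # interrupt: transfer complete
      print(memory)   # [0, 0, 0, 0, 100, 101, ..., 107, 0, 0, 0, 0]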
    The External Interface: FireWire and InfiniBand
    Types of Interfaces
    One major characteristic of the interface is whether it is serial or parallel:
    - Parallel interface – there are multiple lines connecting the I/O module and the peripheral, and multiple bits are transferred simultaneously
    - Serial interface – there is only one line used to transmit data, and bits must be transmitted one at a time
    With new-generation serial interfaces, parallel interfaces are becoming less common. In either case, the I/O module must engage in a dialogue with the peripheral. In general terms the dialogue may look as follows:
    1. The I/O module sends a control signal requesting permission to send data
    2. The peripheral acknowledges the request
    3. The I/O module transfers data
    4. The peripheral acknowledges receipt of the data
    For a detailed explanation of FireWire and InfiniBand technology read pages 264–270 of the textbook

    Read the article

  • Building Java EE in the Cloud – Webcast August 30th 2012

    - by JuergenKress
    Building Java EE in the Cloud. Thursday, August 30, 2012 at 10:00 AM PDT. While cloud computing is making big strides, enterprises face challenges in starting meaningful adoption of the technology. The steps enterprises take toward cloud adoption differ, often with varying results. Some choose to optimize current applications with basic steps, resulting in minimal benefits. Others transform the entire portfolio, with a complete architecture overhaul, and build business agility as a result. Join Anand Kothari, Principal Product Manager for Oracle Cloud Java Service, and Larry Carvalho, Principal Analyst at Robust Cloud, LLC, as they discuss the evolution of the cloud and how Oracle Cloud Java Service enables enterprises to make transformational changes. Speakers: Anand Kothari and Larry Carvalho. For details please visit the registration page. WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: Anand Kothari,Larry Carvalho,Java,Java EE6,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress,Cloud

    Read the article

  • Oracle on Oracle: Is that all?

    - by Darin Pendergraft
    On October 17th, I posted a short blog and a podcast interview with Chirag Andani, talking about how Oracle IT uses its own IDM products. Blog link here. In response, I received a comment from reader Jaime Cardoso ([email protected]) who posted:
    “- You could have talked about how by deploying Oracle's open standards based technology you were able to integrate any new system in your infrastructure in days.
    - You could have talked about how by deploying federation you were enabling the business side to keep all their options open in terms of companies to buy and sell while maintaining perfect employee and customer's single view.
    - You could have talked about how you are now able to cut response times to your audit and security teams into 1/10th of your former times.
    Instead you spent 6 minutes talking about single sign on and self provisioning? If I didn't knew your IDM offer so well I would now be wondering what its differences from Microsoft's offer was. Sorry for not giving a positive comment here but, please, your IDM suite is very good and you simply aren't promoting it well enough”
    So I decided to send Jaime a note asking him about his experience, and to get his perspective on what makes the Oracle products great. What I found out is that Jaime is a very experienced IDM architect with several major projects under his belt.
    Darin Pendergraft: Can you tell me a bit about your experience? How long have you worked in IT, and what is your IDM experience?
    Jaime Cardoso: I started working in "serious" IT in 1998, when I became Netscape's technical specialist in Portugal. Netscape Portugal didn't exist, so I was working for their VAR here. Most of my work at the time was with Netscape's mail server and LDAP server. Since that time I've been bouncing between the systems side (Sun resellers, Solaris work, and even a stint with Sun's Engineering in the making of a hierarchical storage product – Sun CIS, if you know it) and the applications side, mostly in LDAP and IDM. Over the years I've done support, service delivery and pre-sales/architecture design of IDM solutions for most big customers in Portugal. To name a few projects:
    - The first European deployment of Sun Access Manager (SAPO – Portugal Telecom)
    - The identity repository of 5/5 of the biggest Portuguese banks
    - The Portuguese government federation of services project
    DP: OK, in your blog response you mentioned 3 topics. 1. By using Oracle's standards-based architecture, you were able to integrate any new system in days: can you give an example? What systems, how long did it take, number of apps/users/accounts/roles, etc.?
    JC: It's relatively easy to design a user management strategy for a static environment, or if you simply assume that you're an <insert vendor here> shop and all your systems will bow to that vendor's will. We've all seen that path – the use of proprietary technologies in interoperability solutions – but then reality kicks in. As an ISP, I recall that I made the technical decision to use Active Directory as a central authentication system for the entire IT infrastructure. Clients, systems, apps: everything was there. As a good part of the systems and apps were running on UNIX, a connector became needed in order to have the UNIX boxes authenticate against AD. And that strategy worked, but each new machine required the component to be installed, monitoring had to be set up for that component, and each new app had to be independently certified.
    A self-care user portal was an ongoing project; AD access assumes the client is inside the domain, something the ISP's customers (and the UNIX boxes) weren't, nor had any intention of ever being. When the Windows 2008 rollout was done, Microsoft changed the Active Directory interface. The Windows administrators didn't have enough know-how about directories and the way systems outside the MS world behaved, so on the go-live things weren't properly tested and a general outage followed. Several hours and 1 rollback later, everything was back working. But the ISP still had to change all of its applications to work with the new access methods, and the effort spent on the self-service user portal was reset. To keep with the same strategy, they would also have to trust Microsoft not to change interfaces again. Simply by putting up an Oracle LDAP server in the middle and replicating the user info from the AD into LDAP, most of the problems went away. Even systems for which no AD connector existed had PAM in them, so integration was made at the OS level, fully supported by the OS supplier. Sun Identity Manager already had a self-care portal, combined with a user workflow, so all the clearances had to be given before an account was created or updated. Adding a new system as a client for these authentication services was simply a new checkbox in the OS installer, and even Tru64 systems were, for the first time, integrated – with 5 minutes of work by a junior system admin. True, all the Windows clients and MS apps still went to the AD for their authentication needs, so from the start everybody knew that they weren't 100% free of migration pains; but now they had a single point of problems to look at. If you're looking for numbers:
    - 500K directory entries (users)
    - 200-300 systems
    After the initial setup, I personally integrated about 20 systems/apps against LDAP in 1 day while being watched by the different IT teams. The internal IT staff did the rest.
    DP: 2. Using federation allows the business to keep options open for buying and selling companies, and yet maintain a single view for both employee and customer. What do you mean by this? Can you give an example?
    JC: The market is dynamic. The company that's being bought today will tomorrow be sold again. Companies that spread across different markets may see the regulator forcing a sale of part of the company for monopoly reasons, and companies that operate in multiple countries have to comply with different legislations. Our job as IT architects, while addressing customer and employee authentication services, is quite hard and quite contrary. On one hand, we need to give all of our employees access to the relevant systems, apps and resources, and we already have marketing talking with us, trying to find out who's a customer of the bought company but not yet one of ours, to address. On the other hand, we have to do all that while keeping in mind that we may have to break up the whole effort, and that different countries' legislation may become a problem for a full integration plan. That's a job for user federation. You don't want to be the one telling your President that he will sell that business unit without its customer database (making the deal worth a lot less), or that the buyer will take with him a copy of your entire customer database. Federation enables you to start controlling permissions for users outside of your traditional authentication realm. So what if the people of that company you just bought are keeping their old logins?
    Do you want, because of that, to have a dedicated system for their expense reports? And do you want to keep their sales (and pre-sales) people out of the loop in terms of your group's path? Control the information flow, establish a federation trust circle, and give access to your apps to users that haven't (yet?) been brought into your internal login systems. You can still see your users in a unified view; you obviously control whether a user has access to any particular application, whether that user is in your local database or stored in a directory on the other side of the world.
    DP: 3. Cut response times of audit and security teams to 1/10th. Is this a real number? Can you give an example?
    JC: No, I don't have any backing for this number. One of the companies I did system administration for has a SOX compliance policy in place (I remind you that I live in Portugal, so this definition of SOX may be somewhat different from what you're used to) and, every time the audit team says they'll do another audit, we have to negotiate with them the size of the sample, and we spend about 15 man-days gathering all the required info they ask for. I did some work with Sun's Identity Auditor and, from what I've been seeing, Oracle's product is even better; most of the information they ask for would have been provided in a few hours with the help of this tool. I do stand by what I said here but, to be honest, someone from the Identity Auditor team would do a much better job than me at explaining these time savings.
    Jaime is right: the Oracle IDM products have a lot of business value, and Oracle IT is using them for a lot more than I was able to cover in the short podcast that I posted. I want to thank Jaime for his comments and perspective. We want these blog posts to be informative and honest – so if you have feedback for the Oracle IDM team on any topic discussed here, please post your comments below.

    Read the article

  • SUPINFO International University in Mauritius

    For a while now I have been considering picking up my activities as a student, and I'd like to get a degree in Computer Science.
    Personal motivation
    After all these years as a professional software (and database) developer, I have the personal urge to complete this part of my education. Having various certifications from Microsoft and being awarded Microsoft Most Valuable Professional (MVP) twice looks pretty awesome on a resume, but having a "proper" degree would just complete my package. During the last couple of years I already got in touch with C-SAC (a local business school with degree courses), the University of Mauritius, and BCS, the Chartered Institute for IT, to check the options to enroll as an experienced software developer. Quite frankly, it was kind of alienating to receive that feedback: Start from scratch! No, seriously? Spending x amount of years sitting through courses that might be outdated and form part of your daily routine? Probably being in the awkward situation in which your professional expertise exceeds the lecturer's knowledge? I don't know... but if that's the path to walk... well, then I might have to go for it.
    SUPINFO International University
    Some weeks ago I was contacted by the General Manager, Education Recruitment and Development of Medine Education Village, Yamal Matabudul, to have a chat on how the local IT scene, namely the Mauritius Software Craftsmanship Community (MSCC), could assist in their plans to promote their upcoming campus. Medine went into partnership with the French-based SUPINFO International University, and Mauritius will be the 36th location worldwide for SUPINFO. Actually, the concept of SUPINFO is very similar to the common understanding of an apprenticeship in Germany. Not only does a student enroll in the programme, but they are also placed into various internships as part of the curriculum. It's a big advantage in my opinion, as the person stays in touch with the daily procedures and workflows in the real world of IT. Statements like "We just received a 'crash course' of information and learned new technology which is equivalent to 1.5 months of lectures at the university" wouldn't form part of the experience of such an education.
    Open Day at the Medine Education Village
    Last Saturday, Medine organised their Open Day, and it was the official inauguration of the SUPINFO campus in Mauritius. It's now listed on their website, too – but be warned, the site is mainly in French, although the courses are all done in English. Not only was it a big opportunity to "hang out" on the campus of Medine, but it was great to see the first professional partners for their internship programme, too. Oh, just for the record, IOS Indian Ocean Software Ltd. will also be among the future employers for SUPINFO students. More about that in an upcoming blog entry.
    [Photo: Open Day at Medine Education Village – SUPINFO International University in Mauritius]
    Mr Alick Mouriesse, President of SUPINFO, arrived the previous day, and he gave all attendees a great overview of the roots of SUPINFO, the general development of the educational syllabus, and their high emphasis on partnerships with local IT companies in order to assist their students in getting future jobs and also feeling the heartbeat of technology live – something which is completely missing in classic institutions of tertiary education in Computer Science. And since I was on tour with my children, as usual during weekends, he also talked about the outlook of having a SUPINFO campus in Mauritius.
    Apart from the close connection to IT companies and providing internships to students, SUPINFO clearly works at an international level. Meaning, students of SUPINFO can move around the globe and continue their studies seamlessly. For example, you might enroll for your first year in France, then continue your 2nd and 3rd years in Canada or any other country with a SUPINFO campus to earn your bachelor's degree, and then live and study in Mauritius for the next 2 years to achieve a master's degree.
    [Photo: Having a chat with Dale Smith, Expand Technologies, after his interesting session on Technological Entrepreneurship – TechPreneur]
    [Photo: More questions by other craftsmen of the Mauritius Software Craftsmanship Community]
    And of course, this concept works in any direction, giving Mauritian students a huge (!) opportunity to live, study and work abroad. Thanks to this, Medine has already announced that there will be new facilities near Cascavelle to provide dormitories and other amenities to international students coming to our island. Awesome!
    Okay, but why SUPINFO?
    Well, coming back to my original statement – I'd like to get a degree in Computer Science – SUPINFO has a process called Validation of Acquired Experience (VAE), which is tailor-made for employees in the field of IT and allows you to enroll in their course programme. I already got in touch with their online support chat, but was only redirected to some FAQs on their website, unfortunately. So, during the Open Day I seized the opportunity to have a one-on-one conversation with Alick Mouriesse, and he clearly encouraged me to gather my certifications and working experience. SUPINFO does an individual evaluation prior to assigning a course level, and hopefully my chances of getting some modules ahead of studies look better than with the other institutes. Don't get me wrong, I don't want to go down the easy route, but why should someone sit through "Database 101" or "Principles of OOP" when applying and preaching database normalisation and practicing Clean Code Developer are like flesh and blood? Anyway, I'll be off to get my transcripts of certificates together with my course assignments from the old days at the university. Yes, I studied Applied Chemistry for a couple of years before intersecting into IT and software development in particular... ;-)

    Read the article

  • adding hard drive failed

    - by dennis ditch
    I was using Von Welch's instructions at http://v2kblog.blogspot.com/2007/05/adding-second-hard-drive.html to install a 500 GB Seagate drive to write the recordings to. Everything seemed to be going OK until mkfs /dev/sdb1; then we get an error message: mkfs.ext2: inode_size (128) * inodes_count (0) too big for a filesystem with 0 blocks, specify a higher inode_ratio (-i) or lower inode count (-N). My son is trying to help me, but this is beyond him. Our knowledge of Unix/Linux is very limited; at work the support people just send me a line-by-line cookbook. I would appreciate any help you can give us. The computer is a Gateway MDP E4000. Mythbuntu is installed on a PATA drive and we are adding a SATA drive as the second drive. The BIOS sees the drive.

    Read the article

  • Is GParted safe?

    - by Olivier Pons
    I want to resize my partitions. I have 3 partitions: Ubuntu 10.04, Windows Seven, and Ubuntu 11.10. It boots with the bootloader installed by the Ubuntu 11.10 version. I want to expand (only expand) all 3 partitions. My HD is 1.8 TB, so it's big, and I have no way to make a backup before expanding. So my question is: if you tell me GParted works 99.99% of the time, I'm willing to take the risk. If you tell me GParted works 90% of the time, I won't take that risk.

    Read the article

  • How will Quantum computing affect us?

    - by CiscoIPPhone
    I am interested in quantum computing, but have not studied it in depth. Things like Shor's algorithm intrigue me. My question is: If quantum computing took off in a big way (i.e. functional quantum home computers were available) how would it affect us programmers and software developers? Would we have to learn how to make use of superposition and entanglement - would it change how we write algorithms? Would more mathematical programmers be required/would we need new skills? Would it change nothing at all from our perspective (i.e. would it be abstracted)? Your opinion is welcome.

    Read the article

  • Oracle VM Slides and Replay

    - by Alex Blyth
    Thanks everyone for attending the webcast on "Oracle VM and Virtualisation" last week. I know I got some useful info out of the session, and on behalf of all those who attended I'll say thank you to Dean Samuels for spending some time talking to us. Slides are available here: Oracle VM - 28/04/2010. View more presentations from Oracle Australia. You can download the replay here. Next week's session is on Oracle Database Security and will briefly cover all the big guns like Transparent Data Encryption, Database Vault, Audit Vault, and Flashback Data Archive, as well as touching on some of the features that are so often skipped over. You can enroll for this session here. Thanks again. Cheers, Alex

    Read the article

  • How do I let programmers know something useful?

    - by Shane
    Quite often I go through pain to discover how to do something useful – in this case, how to get PHP running on Google App Engine without the 'resin.jar too big' problem – but I have nowhere to let people know. In this particular case, the web answers were incomplete, outdated or plain wrong. I could create a blog, but it's overkill. I just want to let programmers know a quick solution/approach. I'm not sure it's the done thing to ask a question and then answer it. In short, it'd be nice to be able to just dump a sort of combined question/answer/snippet, then let others comment as usual, shoot it down, like it, whatever. How do the rest of you approach this? Cheers, Shane

    Read the article
