Search Results

Search found 12325 results on 493 pages for 'remote execution'.


  • InstantWild: Identify Animals From Around the World; Help Scientists

    - by Jason Fitzpatrick
Web-based/iPhone: InstantWild is an iOS and web application that displays photos from research cameras around the world; help scientists by turning your eco-voyeurism into positive identifications of endangered species. It’s a neat mashup of a fun application and legitimate research. Hundreds of remote cameras have been set up around the world, designed to capture photographs of animals (especially endangered ones) in their native habitats. When you visit InstantWild (or load the app on your iPhone) you’re treated to those pictures, and as you browse you can help out by tagging the animals in the photos to assist zoologists and other scientists in their research. Hit up the link below to check out the web-based version and even grab a copy for your phone. InstantWild [via Wired]

    Read the article

  • Information about SATA, IDE (PATA) controllers

    - by Adam Matan
I have a remote computer on which I want to install a new hard drive for rsync backup. The problem is, I don't know what controller technology is used (PATA, SATA, SATA2, ...) or how many slots are left, and I want to spare myself an unnecessary drive out there just to open the chassis and look at the wires. How do I query the SATA or PATA controllers? I'm interested in the following points: which controllers exist in the machine; how many (and which) disks are attached to each controller; and how many slots are still available.
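Not an answer from the thread, but a sketch of the standard way to answer all three questions without opening the chassis (assumes a reasonably modern distro; lsscsi and hdparm may need installing):

    lspci | grep -Ei 'sata|ide|ahci|raid'   # which controllers exist

    sudo lsscsi                             # which disks hang off which controller
    dmesg | grep -Ei 'ata[0-9]+'            # the kernel's view of every ATA port;
                                            # ports with no disk attached are free slots

    sudo hdparm -I /dev/sda | head -n 20    # per-disk details: model, SATA generation

Comparing the ataN ports reported by the kernel against the disks actually attached gives a good estimate of the free connectors, although software can't tell you whether spare power and data cables are physically present.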

    Read the article

  • How can I mount an AFS filesystem?

    - by Ben
    My current method is to mount the filesystem via SSH using Nautilus's graphical interface, but I would much prefer to be able to use some tool that mounts the AFS filesystem and gives me access to AFS-specific features (permissions, etc.). I've tried installing OpenAFS via apt-get, but so far the kernel module has refused to compile. Also, assuming I get OpenAFS installed, I'm not quite sure how to actually mount the remote filesystem to, say, /media/afs or some directory. I'm running Maverick with the 2.6.36-020636-generic kernel from http://kernel.ubuntu.com/~kernel-ppa/mainline/ Thanks for the help!
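For what it's worth, a hedged sketch of the usual OpenAFS client route on Ubuntu; the DKMS package builds the kernel module against your running kernel, which may get around the compile failure (package names are the standard Ubuntu ones, and the cell name below is only an example):

    sudo apt-get install openafs-client openafs-krb5 openafs-modules-dkms
                        # the installer asks for your cell name and cache size
    sudo service openafs-client start

    # AFS isn't mounted per-share like sshfs; the whole namespace appears under /afs:
    ls /afs/grand.central.org/

    # AFS-specific features go through the fs utility, e.g. ACLs:
    fs listacl /afs/grand.central.org/software

Note that kernels from the mainline PPA are exactly the kind of thing DKMS can still trip over, so treat this as a starting point rather than a guaranteed fix.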

    Read the article

  • Product Support Webcast for Existing Customers: Getting the Most from My Oracle Support, Tips and Tricks for WebCenter Content

    - by John Klinke
My Oracle Support (MOS) is the one-stop support solution for WebCenter customers with Oracle Premier Support. Join us for this 1-hour Advisor Webcast, "Getting the Most from My Oracle Support, Tips and Tricks for WebCenter Content", on July 11, 2013 at 11:00am Eastern (16:00 UK / 17:00 CET / 8:00am Pacific / 9:00am Mountain). Topics will include: My Oracle Support Search, Advanced Search, and PowerViews; Information Centers; Latest Patches and Bundle Patches; My Oracle Support Community; and Remote Diagnostic Administration (RDA). Make sure to register and mark this date on your calendar. Register here: https://oracleaw.webex.com/oracleaw/onstage/g.php?d=594341268&t=a Once your registration request is approved, you will receive a confirmation email with instructions for joining the webcast on July 11. Past Advisor Webcasts have been recorded and can be viewed under the 'archived' tabs on this knowledge base announcement: https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1456204.1 (active support contract required)

    Read the article

  • Ubuntu Server 12 HD full

    - by julio
I have a server with Ubuntu Server 12; today it stopped some services, and I found some comments about a full disk, so I ran df -h:

    S.files                  Size  Used  Disp  Use%
    /dev/mapper/ubuntu-root  455G  434G     0  100%  /
    udev                     1,7G  4,0K  1,7G    1%  /dev
    tmpfs                    689M  4,2M  685M    1%  /run
    none                     5,0M     0  5,0M    0%  /run/lock
    none                     1,7G     0  1,7G    0%  /run/shm
    /dev/sda1                228M   51M  166M   24%  /boot
    overflow                 1,0M     0  1,0M    0%  /tmp

Then I tried to delete some files, but I did it from a remote Windows computer, just right-click and the "delete" option on the files, and the disk is still full. Does Ubuntu Server have any trash folder, or what else could be happening?
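Two usual suspects in this situation, sketched below: deleting over a Windows (Samba) share can just move files into a hidden recycle directory if the share is configured that way, and space from genuinely deleted files isn't released while a process still holds them open. Paths are illustrative:

    sudo du -xh / --max-depth=2 | sort -rh | head -n 20   # biggest directories on /
    sudo find / -xdev -type d -iname '*recycle*'          # Samba vfs-recycle folders, if any
    sudo lsof +L1 | head                                  # deleted-but-still-open files

Restarting the service holding a deleted file (or rebooting) releases that space.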

    Read the article

  • Sending keystrokes using Python

    - by Rudi Strydom
I am trying to build a remote control application to control media on my Ubuntu machine. Does anyone know a way to accomplish this? The media keys in particular. Thank you. EDIT: I have tried using XTE, but it seems Python is truncating the input, or there is a limit or something, which means that you can't do Ctrl + Key key presses, and that won't suit my needs. I also tried uinput, but alas you need to run it as root, which also will not suit my needs. Now I am looking at EVDEV, which seems promising, that is if I can get it working.
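One avenue the poster doesn't mention, offered only as a suggestion: xdotool injects X key events without root, media keys are ordinary XF86 keysyms, and modifier combinations are a single argument. A sketch, assuming an X session:

    sudo apt-get install xdotool

    xdotool key XF86AudioPlay          # play/pause
    xdotool key XF86AudioNext
    xdotool key XF86AudioRaiseVolume
    xdotool key ctrl+alt+t             # Ctrl+Key combinations work too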

    Read the article

  • How to create a restricted SSH user for port forwarding?

    - by Lekensteyn
ændrük suggested a reverse connection for getting an easy SSH connection with someone else (for remote help). For that to work, an additional user is needed to accept the connection. This user needs to be able to forward his port through the server (the server acts as proxy). How do I create a restricted user that can do nothing more than the above? The new user must not be able to:

    - execute shell commands
    - access files or upload files to the server
    - use the server as a proxy (e.g. webproxy)
    - access local services which were otherwise not publicly accessible due to a firewall
    - kill the server

Summarized, how do I create a restricted SSH user which is only able to connect to the SSH server without privileges, so I can connect through that connection to his computer?
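A hedged sketch of one common approach (it assumes OpenSSH 6.2 or later for the per-direction AllowTcpForwarding values; the user name is an example):

    # 1. A user with no usable shell; 'ssh -N' never requests a shell,
    #    so a reverse tunnel still works:
    sudo adduser --disabled-password --shell /usr/sbin/nologin tunnel

    # 2. In /etc/ssh/sshd_config:
    Match User tunnel
        AllowTcpForwarding remote    # allow -R tunnels, refuse -L/-D proxying
        PermitOpen none              # no local-forward destinations at all
        X11Forwarding no
        AllowAgentForwarding no
        PermitTunnel no
        ForceCommand /bin/false      # any command request simply exits

    # 3. Reload and test:
    sudo service ssh reload
    # Helper runs:  ssh -N -R 2222:localhost:22 tunnel@server
    # You run (on the server):  ssh -p 2222 user@localhost

On older servers, per-key authorized_keys options (no-pty, permitopen, command="/bin/false") can approximate the same restrictions.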

    Read the article

  • Ransomware: Why This New Malware is So Dangerous and How to Protect Yourself

    - by Chris Hoffman
Ransomware is a type of malware that tries to extort money from you. One of the nastiest examples, CryptoLocker, takes your files hostage and holds them for ransom, forcing you to pay hundreds of dollars to regain access. Most malware is no longer created by bored teenagers looking to cause some chaos. Much of the current malware is now produced by organized crime for profit and is becoming increasingly sophisticated.

How Ransomware Works

Not all ransomware is identical. The key thing that makes a piece of malware “ransomware” is that it attempts to extort a direct payment from you. Some ransomware may be disguised. It may function as “scareware,” displaying a pop-up that says something like “Your computer is infected, purchase this product to fix the infection” or “Your computer has been used to download illegal files, pay a fine to continue using your computer.” In other situations, ransomware may be more up-front. It may hook deep into your system, displaying a message saying that it will only go away when you pay money to the ransomware’s creators. This type of malware could be bypassed via malware removal tools or just by reinstalling Windows. Unfortunately, ransomware is becoming more and more sophisticated. One of the latest examples, CryptoLocker, starts encrypting your personal files as soon as it gains access to your system, preventing access to the files without knowing the encryption key. CryptoLocker then displays a message informing you that your files have been locked with encryption and that you have just a few days to pay up. If you pay them $300, they’ll hand you the encryption key and you can recover your files. CryptoLocker helpfully walks you through choosing a payment method and, after paying, the criminals seem to actually give you a key that you can use to restore your files. You can never be sure that the criminals will keep their end of the deal, of course. It’s not a good idea to pay up when you’re extorted by criminals. On the other hand, businesses that lose their only copy of business-critical data may be tempted to take the risk — and it’s hard to blame them.

Protecting Your Files From Ransomware

This type of malware is another good example of why backups are essential. You should regularly back up files to an external hard drive or a remote file storage server. If all your copies of your files are on your computer, malware that infects your computer could encrypt them all and restrict access — or even delete them entirely. When backing up files, be sure to back up your personal files to a location where they can’t be written to or erased. For example, place them on a removable hard drive or upload them to a remote backup service like CrashPlan that would allow you to revert to previous versions of files. Don’t just store your backups on an internal hard drive or network share you have write access to. The ransomware could encrypt the files on your connected backup drive or on your network share if you have full write access. Frequent backups are also important. You wouldn’t want to lose a week’s worth of work because you only back up your files every week. This is part of the reason why automated backup solutions are so convenient. If your files do become locked by ransomware and you don’t have the appropriate backups, you can try recovering them with ShadowExplorer. This tool accesses “Shadow Copies,” which Windows uses for System Restore — they will often contain some personal files.
How to Avoid Ransomware

Aside from using a proper backup strategy, you can avoid ransomware in the same way you avoid other forms of malware. CryptoLocker has been verified to arrive through email attachments, via the Java plug-in, and installed on computers that are part of the Zeus botnet. Use a good antivirus product that will attempt to stop ransomware in its tracks. Antivirus programs are never perfect and you could be infected even if you run one, but it’s an important layer of defense. Avoid running suspicious files. Ransomware can arrive in .exe files attached to emails, from illicit websites containing pirated software, or anywhere else that malware comes from. Be alert and exercise caution over the files you download and run. Keep your software updated. Using an old version of your web browser, operating system, or a browser plugin can allow malware in through open security holes. If you have Java installed, you should probably uninstall it. For more tips, read our list of important security practices you should be following. Ransomware — CryptoLocker in particular — is brutally efficient and smart. It just wants to get down to business and take your money. Holding your files hostage is an effective way to prevent removal by antivirus programs after it’s taken root, but CryptoLocker is much less scary if you have good backups. This sort of malware demonstrates the importance of backups as well as proper security practices. Unfortunately, CryptoLocker is probably a sign of things to come — it’s the kind of malware we’ll likely be seeing more of in the future.
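The “previous versions” point is the crux: backups the infected machine can overwrite are not protection. As an illustration (not from the article), here is a minimal pull-based snapshot sketch in the style of rsync rotation schemes; it runs on a separate backup box, so the PC being backed up never has write access to old snapshots. Host and paths are hypothetical:

    #!/bin/sh
    # Run on the BACKUP machine (e.g. from cron), never on the PC itself.
    SRC="user@desktop:/home/user/"   # hypothetical source
    DEST="/backups/desktop"          # local snapshot root
    STAMP=$(date +%Y-%m-%d)

    # Hard-link unchanged files against the previous snapshot, so every
    # dated directory is a full, independently browsable version.
    # (The first run has no "latest" and simply copies everything.)
    rsync -a --delete --link-dest="$DEST/latest" "$SRC" "$DEST/$STAMP"
    ln -sfn "$DEST/$STAMP" "$DEST/latest"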

    Read the article

  • Pair programming remotely with Visual Studio?

    - by shamp00
What tools exist to facilitate pair programming with Visual Studio when the programmers are not in the same physical location? At the moment we are thinking voice (Skype?) plus remote desktop (VNC? TeamViewer?), but it would be good to know of other suggestions and experiences. Also, is there anything more integrated with Visual Studio? A bit more background: we are two experienced developers who have collaborated well for a long time on a large mature project (ASP.NET, Windows Forms and SQL Server), but we are not usually working on the same part of the code base at the same time. We intend to spend some weeks doing substantial refactoring, and it would be ideal if we could do this work with a pair-programming approach.

    Read the article

  • EPM Architecture: Foundation

    - by Marc Schumacher
This post is the first of a series that will describe the EPM System architecture component by component. During the following weeks, a couple of follow-up posts will describe each component. Where applicable, a component will have its standard port next to its name in brackets. EPM Foundation is Java based and consists of two web applications, Shared Services and Workspace. Both applications are accessed by browser through Oracle HTTP Server (OHS) or Internet Information Services (IIS). Communication to the backend database is done by JDBC. The file system to store Lifecycle Management (LCM) artifacts can be either local or remote (e.g. NFS, network share). For authentication purposes, the EPM Product Suite can connect to external directories or databases. Interaction with other EPM Suite components like product-specific Lifecycle Management connectors or Reporting and Analysis Web happens through the HTTP protocol. The next post will cover Reporting and Analysis.

    Read the article

  • Upgraded to 11.10, lost personal folders, Ubuntu One shows no files

    - by Kevin
Upgraded from 10.10 to 11.04; the system would only come up in terminal mode, but it told me that an additional upgrade was available and asked if I wanted to do that. Foolishly thinking that might fix the problem, I said yes. This time it did not make it all the way through the upgrade; when I came back to the computer over an hour later, the screen was filled with an error message "could not open display", and I had to reboot. Went to recovery mode on reboot to install the nvidia module; when I rebooted, the system came up fine, but without carrying over my personal folders. I have the home folder, but no personally named folder in it. Came to Ubuntu One, but it gives an error message: File Sync error. (org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked Is there a way around this in order to restore my files? I know my files existed on Ubuntu One as of a few months ago.

    Read the article

  • AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution by Using AlwaysOn Availability Groups

    SQL Server 2012 AlwaysOn Availability Groups provides a unified high availability and disaster recovery (HADR) solution that improves upon legacy functionality previously found across disparate features. Prior to SQL Server 2012, several customers used database mirroring to provide local high availability within a data center, and log shipping for disaster recovery across a remote data center. With SQL Server 2012, this common design pattern can be replaced with an architecture that uses availability groups for both high availability and disaster recovery. This paper details the key topology requirements of this specific design pattern, including quorum configuration considerations, steps required to build the environment, and a workflow that shows how to handle a disaster recovery event in the new topology.

    Read the article

  • Developing Schema Compare for Oracle (Part 1)

    - by Simon Cooper
SQL Compare is one of Red Gate's most successful SQL Server tools; it allows developers and DBAs to compare and synchronize the contents of their databases. Although similar tools exist for Oracle, they are quite noticeably lacking in the usability and stability that SQL Compare is known for in the SQL Server world. We could see a real need for a usable schema comparison tool for Oracle, and so the Schema Compare for Oracle project was born. Over the next few weeks, as we come up to the release of v1, I'll be doing a series of posts on the development of Schema Compare for Oracle. For the first post, I thought I would start with the main pitfalls that we stumbled across when developing the product, especially from a SQL Server background.

1. Schemas and Databases

The most obvious difference is that the concept of a 'database' is quite different between Oracle and SQL Server. On SQL Server, one server instance has multiple databases, each with separate schemas. There is typically little communication between separate databases, and most databases are no more than about 1000-2000 objects. This means SQL Compare can register an entire database in a reasonable amount of time, and cross-database dependencies probably won't be an issue. It is a quite different scene under Oracle, however. The terms 'database' and 'instance' are used interchangeably (although technically 'database' refers to the datafiles on disk, and 'instance' to the running Oracle process that reads and writes to the database), and a database is a single conceptual entity. This immediately presents problems, as it is infeasible to register an entire database as we do in SQL Compare; in my Oracle install, using the standard recommended options, there are 63975 system objects. If we tried to register all those, not only would it take hours, but the client would probably run out of memory before we finished. As a result, we had to allow people to specify what schemas they wanted to register. This decision had quite a few knock-on effects for the design, which I will cover in a future post.

2. Connecting to Oracle

The next obvious difference is in actually connecting to Oracle – in SQL Server, you can specify a server and database, and off you go. On Oracle things are slightly more complicated. SIDs, Service Names, and TNS: a database (the files on disk) must have an identifier, unique among the databases on the system, called the SID. It also has a global database name, which consists of a name (which doesn't have to match the SID) and a domain. Alternatively, you can identify a database using a service name, which normally has a 1-to-1 relationship with instances, but may not if, for example, using RAC (Real Application Clusters) for redundancy and failover. You specify the computer and instance you want to connect to using TNS (Transparent Network Substrate). The user-visible part is a config file (tnsnames.ora) on the client machine that specifies how to connect to an instance. For example, the entry for one of my test instances is:

    SC_11GDB1 =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = simonctest)(PORT = 1521))
        )
        (CONNECT_DATA =
          (SID = 11gR1db1)
        )
      )

This gives the hostname, port, and SID of the instance I want to connect to, and associates it with a name (SC_11GDB1). The tnsnames syntax also allows you to specify failover, multiple descriptions and address lists, and client load balancing. You can then specify this TNS identifier as the data source in a connection string.
Although using ODP.NET (the .NET dlls provided by Oracle) was fine for internal prototype builds, once we released the EAP we discovered that this simply wasn't an acceptable solution for installs on other people's machines. Due to .NET assembly strong naming, users had to have installed on their machines the exact same version of the ODP.NET dlls as we had on our build server. We couldn't ship the ODP.NET dlls with our installer, as the Oracle license agreement prohibited this, and we didn't want to force users to install another Oracle client just so they could run our program. To be able to list the TNS entries in the connection dialog, we also had to locate and parse the tnsnames.ora file, which was complicated by users with several Oracle client installs and intricate TNS entries. After much swearing at our computers, we eventually decided to use a third-party Oracle connection library from Devart that we could ship with our program; this could use whatever client version was installed, parse the TNS entries for us, and also had the nice feature of being able to connect to an Oracle server without having any client installed at all. Unfortunately, their current license agreement prevents us from shipping an Oracle SDK, but that's a bridge we'll cross when we get to it.

3. Running synchronization scripts

The most important difference is that in Oracle, DDL is non-transactional; you cannot roll back DDL statements like you can on SQL Server. Although we considered various solutions to this, including using the flashback archive or recycle bin, or generating an undo script, no reliable method of completely undoing a half-executed sync script has yet been found, so in this case we simply have to trust that the DBA or developer will check and verify the script before running it. However, before we got to that stage, we had to get the scripts to run in the first place... To run a synchronization script from SQL Compare we essentially pass the script over to the SqlCommand.ExecuteNonQuery method. However, when we tried to do the same for an OracleConnection we got a very strange error – 'ORA-00911: invalid character' – even when running the most basic CREATE TABLE command. After much hair-pulling and Googling, we discovered that Oracle has got some very strange behaviour with semicolons at the end of statements. To understand what's going on, we need to take a quick foray into SQL and PL/SQL.

PL/SQL is not T-SQL

In SQL Server, T-SQL is the language used to interface with the database. It has DDL, DML, control flow, and many other nice features (like Turing-completeness) that you can mix and match in the same script. In Oracle, DDL SQL and PL/SQL are two completely separate languages, with different syntax, different datatypes and different execution engines within the instance. Oracle SQL is much more like 'pure' ANSI SQL, with no state, no control flow, and only the basic DML commands. PL/SQL is the Turing-complete language, but can only do DML and DCL (i.e. BEGIN TRANSACTION commands). Any DDL or SQL commands that aren't recognised by the PL/SQL engine have to be passed back to the SQL engine via an EXECUTE IMMEDIATE command. In PL/SQL, a semicolon is a valid token used to delimit the end of a statement. In SQL, a semicolon is not a valid token (even though the Oracle documentation gives them at the end of the syntax diagrams).
When you execute the command CREATE TABLE table1 (COL1 NUMBER); in SQL*Plus, the semicolon on the end is a command to SQL*Plus to execute the preceding statement on the server; it strips off the semicolon before passing it on. SQL Developer does a similar thing. When executing a PL/SQL block, however, the syntax is like so:

    BEGIN
      INSERT INTO table1 VALUES (1);
      INSERT INTO table1 VALUES (2);
    END;
    /

In this case, the semicolon is accepted by the PL/SQL engine as a statement delimiter, and instead the / is the command to SQL*Plus to execute the current block. This explains the ORA-00911 error we got when trying to run the CREATE TABLE command – the server is complaining about the semicolon on the end. This also means that there is no SQL syntax to execute more than one DDL command in the same OracleCommand. Therefore, we would have to do a round-trip to the server for every command we want to execute. Obviously, this would cause lots of network traffic and be very slow on slow or congested networks. Our first attempt at a solution was to wrap every SQL statement (without semicolon) inside an EXECUTE IMMEDIATE command in a PL/SQL block and pass that to the server to execute. One downside of this solution is that we get no feedback as to how the script execution is going; we're currently evaluating better solutions to this thorny issue. Next up: Dependencies; how we solved the problem of being unable to register the entire database, and the knock-on effects to the whole product.
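To make that wrapping concrete, here is a sketch of the shape of such a generated script, pushed through SQL*Plus with throwaway credentials and the TNS alias from above (both illustrative): the DDL inside the quotes carries no semicolon, while the block itself ends with / as the terminator.

    sqlplus -s scott/tiger@SC_11GDB1 <<'EOF'
    BEGIN
      -- no semicolon inside the quoted DDL: the SQL engine would reject it
      EXECUTE IMMEDIATE 'CREATE TABLE table1 (col1 NUMBER)';
      EXECUTE IMMEDIATE 'CREATE INDEX idx1 ON table1 (col1)';
    END;
    /
    EOF

The whole block is a single server round-trip, which is precisely why per-statement progress feedback is lost.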

    Read the article

  • Tab Sweep - Coherence, SBT for GlassFish, OSGi in question, Java EE plugins, ...

    - by alexismp
    Recent Tips and News on Java, Java EE 6, GlassFish & more : • Oracle Coherence Team Blog (blogs.oracle.com) • JSF Nightlies (Ed) • Setting up Mobile Server with GlassFish (Greg) • Deploying to remote Glassfish from SBT (Vasil) • OSGi (Jarda) • Building Plugins with Java EE 6 (Adam) • Application Entreprise JSF2 avec Maven ... (simplicity2k) • Project Coin at Devoxx 2011 (Joe)

    Read the article

  • How to prevent code from leaking outside work?

    - by AeroCross
I'm working at an institution that has a really strong sense of "possession" - each line of software we write should be only ours. Ironically, I'm the only programmer (ATM), but we're planning on hiring others. Since my bosses wouldn't count the new programmers as people they can trust, they have an issue with copies of the source code. We use Git, so they would have an entire copy of each of the projects they work on when they clone the repository. We can restrict their access to a single key with Gitolite and bind that to their PCs, but they could copy those keys to another computer and have repository access on another PC. Also (and the most obvious method) they could just upload the files somewhere else, add another remote, or simply copy the files to a USB drive. Is there any (perhaps clever) way to prevent events like these?

    Read the article

  • Flaws in my PHP development setup - sharing sources causing lags

    - by Wiktor
I have the following development setup for my PHP projects: a workstation running Windows 7 with the PhpStorm IDE; Git for version control; CentOS on a virtual machine (VirtualBox) with Apache and MySQL (a copy of the production server). So far, I've been sharing the project's source folders between the host and guest systems, and it worked quite well, only really slowly. The reason is that Apache was reading files from a remote folder (mounted locally). After doing some research, I found out that this setup can be improved by using disk mapping (Samba) instead of folder sharing, so I made that change and configured PhpStorm to automatically deploy files to the mapped drive. Everything works like a charm now, except for one problem: when I change branches, I need to synchronize the project's local folder with the one on the mapped drive, and that takes a lot of time (like branching in SVN). Is there another way to handle this than just working on the files directly on the mapped drive?
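One pattern worth trying, sketched under the assumption that a Unix-ish shell with rsync is available on the workstation (e.g. Git Bash or Cygwin): keep the working copy purely local and let a post-checkout hook mirror it to the mapped drive, so switching branches copies only the files that actually differ. Paths are hypothetical:

    #!/bin/sh
    # .git/hooks/post-checkout  (make it executable)
    # args: $1 = old HEAD, $2 = new HEAD, $3 = 1 for a branch switch
    [ "$3" = "1" ] || exit 0

    # Mirror the working tree to the mapped deploy target;
    # rsync transfers only what changed between the branches.
    rsync -a --delete --exclude='.git/' ./ /z/project/

This keeps Apache reading files local to the VM while removing the manual synchronization step.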

    Read the article

  • Code review process when using GIT as a repository?

    - by Sid
What is the best process for code review when using Git? Current process:

    - We have a Git server with a master branch to which everyone commits
    - Devs work off the local master mirror or a local feature branch
    - Devs commit to the server's master branch
    - Devs request code review on the last commit

Problem: any bug is already in master by the time code review catches it. Worse, usually someone has burnt a few hours trying to figure out what happened... So, we would like:

    - to do code review BEFORE delivery into 'master'
    - a process that works with a global team (no over-the-shoulder reviews!)
    - something that doesn't require an individual dev to be at his desk or his machine to be powered up so someone else can remote in (remove the human dependency; devs go home in different timezones)

We use TortoiseGit for a visual representation of the list of files changed, diffing files, etc. Some of us drop into a Git shell when the GUI isn't enough, but ideally we'd like the workflow to be simple and GUI-based (I want the tool to lift any burden, not my devs).
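The usual Git answer is to make master pull-only and review short-lived branches before they are merged; tools like Gerrit automate exactly this, but the bare workflow is only a few commands (branch and remote names are examples), sketched here:

    # Developer: never commit to master directly
    git checkout -b feature/login-fix
    git commit -am "fix login timeout"
    git push origin feature/login-fix        # review happens against this branch

    # Reviewer, any timezone, any machine:
    git fetch origin
    git diff origin/master...origin/feature/login-fix

    # Integrator, only after sign-off:
    git checkout master
    git merge --no-ff origin/feature/login-fix
    git push origin master

Because nothing reaches master unreviewed, the "bug already in master" problem disappears, and the review itself needs nothing from the author's machine.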

    Read the article

  • How to create a shared folder using command line on a server

    - by sadmicrowave
After following the tutorial here, I ran into a problem. Here is what I did. On my server I installed nfs-kernel-server and edited the /etc/exports file to include the folder I want to share:

    /var *(rw,sync)

On my client machine I edited my fstab file to include the share:

    //128.251.xxx.xxx/var/ ~/uslonsweb003 nfs #username=[username],password=[password], 0 0

Entered the command sudo mount -a, which gives this error:

    mount.nfs: remote share not in 'host:dir' format

Where did I go wrong with this setup? Also, if there is a better way (using the command line) to set up a folder share on an Ubuntu 10.10 server that will be accessed by other Linux and Windows machines, please let me know. UPDATE: The mapped drive now does not let me create, edit, or delete files or folders (read-only access). My configuration is as follows. Client fstab file:

    128.251.xxx.xxx:/var /home/coreyf/uslonsweb003 nfs rw,hard,intr, 0 0

Server exports file:

    /var *(rw,no_root_squash,sync,no_subtree_check)

UPDATE 2: Using Allan's solution, my drive mounted correctly; however, after putting rw,intr as my additional parameters I still cannot create, edit, or delete folders/files.
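A hedged checklist for the remaining read-only problem (assuming the configs quoted above): exports edits only take effect after re-exporting, the fstab options field must not end in a comma, and NFS still enforces ordinary Unix permissions, so the client user's UID needs write access to the exported directories:

    # Server: apply and verify the exports file
    sudo exportfs -ra
    sudo exportfs -v              # confirm /var is exported rw

    # Client: corrected fstab line (no trailing comma in the options):
    # 128.251.xxx.xxx:/var  /home/coreyf/uslonsweb003  nfs  rw,hard,intr  0 0
    sudo mount -a
    mount | grep uslonsweb003     # confirm it mounted rw

    # Server: check that the client user's UID may write to the target
    ls -ln /var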

    Read the article

  • Intermittent Copy/Paste Problem in RDP

    - by Tara Kizer
    If you use RDP to remotely connect to your servers, you've probably encountered a clipboard issue where copy/paste stops working.  A quick Google search on the problem indicates you can easily fix the problem by logging out/logging back in or killing/restarting rdpclip.exe on the remote server.  Here's an article which covers this topic. But what do you do when copy/paste is intermittent?  It works one second, stops working for 5-30 seconds, and then on its own starts working again.  This is what’s occurring in our new non-production environment.  The DBA team is setting up 16 new physical servers and 5 new virtual machines.  I haven’t found a server where this ISN’T happening.  This intermittent copy/paste issue is driving me crazy!

    Read the article

  • Why does UFW have to be (re)started at boot time if it's only an iptables rule manager?

    - by Tomasz Zielinski
The README from the source package says:

    When installing ufw from source, you will also need to integrate it into your
    boot process for the firewall to start when you restart your system. Depending
    on your needs, this can be as simple as adding the following to a startup
    script (eg rc.local for systems that use it):

        # /lib/ufw/ufw-init start

    For systems that use SysV initscripts, an example script is provided in
    doc/initscript.example. See doc/upstart.example for an Upstart example.
    Consult your distribution's documentation for the proper way to modify your
    boot process.

On my system I have this:

    # /etc/ufw/ufw.conf
    #
    # Set to yes to start on boot. If setting this remotely, be sure to add a rule
    # to allow your remote connection before starting ufw. Eg: 'ufw allow 22/tcp'
    ENABLED=yes

So, why does a simple iptables rule manager need to be started at boot time? Is there any secret to that, or does it merely check that all rules are in place?

    Read the article

  • Data management in unexpected places

    - by Ashok_Ora
When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput with low latencies and high availability. Consider a network router that is processing tens (or hundreds) of thousands of network packets per second. So what really happens inside a router? Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example, a destination and associated SLAs. For each packet, the router has to determine the address of the next “hop” to the destination; it has to determine how to prioritize this packet. If it’s a high priority packet, then it has to be sent on its way before lower priority packets. As a consequence of prioritizing high priority packets, lower priority data packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the data packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone’s sure to ask). You have to do all this (and more) while preserving high availability, i.e. if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won’t be happy with a “choppy” VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator – the router is most likely to be in a remote location – it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen. How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers in order to determine the destination, kind of packet (e.g. voice vs. data), SLAs associated with the “owner” of the packet, etc. It looks up the internal database of “rules” of how to process this packet and handles the packet accordingly. The software might choose to hold on to the packet safely for some period of time, if it’s a low priority packet. Ah – this sounds very much like a database problem. For each packet, you have to minimally:

    - Look up the most efficient next “hop” towards the destination. The “most efficient” next hop can change, depending on latency, availability, etc.
    - Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data ftp).
    - Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet, since a network packet is a small “slice” of a session. The context for the “header” packet needs to be stored in the router in order to make this work.
    - If the priority of the packet is low, then “store” the packet temporarily in the router until it is time to forward the packet to the next hop.
    - Update various statistics about the packet.

In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the “send” in a single transaction so that the forwarding address doesn’t change while you’re sending the packet. So, how do you do all this? Berkeley DB is a proven, reliable, high performance, highly available embeddable database, designed for exactly these kinds of usage scenarios.
Berkeley DB is a robust, reliable, proven solution that is currently being used in these scenarios. First and foremost, Berkeley DB (or BDB for short) is very, very fast. It can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database, or as a disk-persistent database. BDB provides high availability – if one board in the router fails, the system can automatically fail over to another board – no manual intervention required. BDB is self-administering – there’s no need for manual intervention in order to maintain a BDB application. No need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations. BDB has been used in over 200 million deployments worldwide over the past two decades for mission-critical applications such as the one described here. You have a choice of spending valuable resources to implement similar functionality, or you could simply embed BDB in your application and off you go! I know what I’d do – choose BDB, so I can focus on my business problem. What will you do?

    Read the article

  • WebCenter Sites 11gR1 Bundled Patch 1 is now available

    - by R.Hunter
There is a new patch available for WebCenter Sites - 11gR1 Bundled Patch 1. The download links can be obtained from the WebCenter Sites Download page. Some of the highlights of WebCenter Sites 11gR1 Bundled Patch 1 are listed below:

    - UI Customization support - A new developer's guide is available for use in customizing the Contributor UI. Customizable UI components include the Dashboard, search views, toolbars, menus, and asset forms. In addition, global or site-specific configuration properties can be specified for controlling what is displayed in the UI.
    - Localization support - The Contributor UI is localized for the following languages: French, German, Italian, Spanish, Brazilian Portuguese, Japanese, Korean, and Simplified & Traditional Chinese.
    - Developer tools (CSDT) now support connection to a remote Sites server.
    - Security updates including a request authentication filter to prevent CSRF attacks, REST API updates, and more.
    - Session replication support in the management user interfaces.
    - Bug fixes.

Please refer to the release notes and documentation for more information.

    Read the article

  • OPN Developer Services for Solaris Developers

    - by user13333379
Independent Software Vendors (ISVs) who develop applications for Solaris 11 can take advantage of a number of interesting services as long as they are OPN members with a Gold (or above) status and a Solaris Knowledge specialization:

    - Free access to a Solaris development cloud with preconfigured Solaris developer zones, covering Solaris development environments for both SPARC and x86: apply for the Oracle Exastack Remote Labs.
    - Free access to patches and support information through MOS for Oracle Solaris, Oracle Solaris Studio, and Oracle Solaris Cluster, including updates for development systems: apply for the Oracle Solaris Development Initiative.
    - Free email developer support for all questions around Oracle Solaris, Oracle Solaris Studio, Oracle Solaris Cluster, and Oracle technologies integrating with Solaris 11: apply for the Solaris Adoption Technical Assistance.

    Read the article

  • Git does not ask for passphrase during pull/push in terminal

    - by Damian
I'm trying to use git from the terminal on my Ubuntu 12.04 desktop. My repository is hosted on GitHub, and I have a key for my desktop. Whenever I do either "git pull" or "git push", a dialog box pops up asking for my passphrase. This works fine if I type the passphrase correctly. However, if I'm connected to my desktop through ssh and do a git pull or push, the command does not prompt for the passphrase and it outputs the following error:

    Permission denied (publickey).
    fatal: The remote end hung up unexpectedly

This error makes sense because I'm not inputting my passphrase. So the question is, how can I get the passphrase prompted in the terminal? Thanks!
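The standard way to do this in a plain terminal session is an SSH agent: load the key once (ssh-add prompts for the passphrase right in the terminal), and subsequent git commands use the cached key. A sketch:

    eval "$(ssh-agent -s)"      # start an agent for this shell
    ssh-add ~/.ssh/id_rsa       # passphrase prompt appears in the terminal

    git pull                    # no dialog, no prompt

The 'keychain' package wraps the same idea so that one agent survives across logins.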

    Read the article

  • Sound notification over SSH

    - by Lekensteyn
I just switched from the Konversation IRC client to the terminal-based IRSSI. I'm starting IRSSI on a remote machine using GNU screen + SSH. I do not get any sound notification on new messages, which means that I have to check IRSSI once in a while for new messages. That's not really productive, so I'm looking for an application or script that plays a sound (preferably /usr/share/sounds/KDE-Im-Irc-Event.ogg and not the annoying beep) on my machine if there is any activity. It would be great if I could disable the notification for certain channels. Or, if that's not possible, some sort of notification via libnotify, thus making it available to GNOME and KDE.
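One widely used recipe, sketched under the assumption that the third-party fnotify.pl script is loaded in irssi (it appends every hilight and private message to ~/.irssi/fnotify on the remote host): tail that file over SSH and turn each line into a popup plus the requested sound. Filtering out channels becomes a simple pattern match:

    #!/bin/sh
    # run on the local machine; 'remote' is the screen+irssi host
    ssh remote 'tail -f -n0 ~/.irssi/fnotify' |
    while read -r source message; do
        case "$source" in
            '#noisychannel') continue ;;    # channels to silence
        esac
        notify-send "$source" "$message"                 # libnotify popup
        paplay /usr/share/sounds/KDE-Im-Irc-Event.ogg    # the preferred sound
    done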

    Read the article
