Search Results

Search found 37101 results on 1485 pages for 'array based'.


  • We'll be at QCon San Francisco!

    - by Carlos Chang
    Oracle Technology Network is a Platinum sponsor at QCon San Francisco. Don't miss these great developer-focused sessions:

    Shay Shmeltzer - How we simplified Web, Mobile and Cloud development for our own developers - the Oracle Story. Over the past several years, Oracle has been developing a new set of enterprise applications in what is probably one of the largest Java-based development projects in the world. How do you take 3000 developers and make them productive? How do you ensure the delivery of cutting-edge UIs for both Mobile and Web channels? How do you enable Cloud-based development and deployment? Come and learn how we did it at Oracle, and see how the same technologies and methodologies can apply to your development efforts.

    Dan Smith - Project Lambda in Java 8. Java SE 8 will include major enhancements to the Java Programming Language and its core libraries. This suite of new features, known as Project Lambda in the OpenJDK community, includes lambda expressions, default methods, and parallel collections (and much more!). The result will be a next-generation Java programming experience with more flexibility and better abstractions. This talk will introduce the new Java features and offer a behind-the-scenes view of how they evolved and why they work the way that they do.

    Arun Gupta - JSR 356: Building HTML5 WebSocket Applications in Java. The family of HTML5 technologies has pushed the pendulum away from rich client technologies and toward ever-more-capable Web clients running on today's browsers. In particular, WebSocket brings new opportunities for efficient peer-to-peer communication, providing the basis for a new generation of interactive and "live" Web applications. This session examines the efforts under way to support WebSocket in the Java programming model, from its base-level integration in the Java Servlet and Java EE containers to a new, easy-to-use API and toolset that are destined to become part of the standard Java platform.

    The complete conference schedule is here: http://qconsf.com/sf2012/schedule/wednesday.jsp

    But wait, there's more! At the Oracle booth, we'll also be covering: Oracle ADF Mobile, Oracle Developer Cloud Service, Oracle ADF Essentials, and NetBeans Project Easel. Hope to see you there!

    Read the article

  • Backup Exec backup-to-disk folder creation - Access denied

    - by ewwhite
    I'm having a difficult time creating a backup-to-disk folder in Symantec Backup Exec 12.5 and Backup Exec 2010. The backend storage is a Nexenta/ZFS-based NAS filer sharing the volume via CIFS. I've also seen the issue on other *nix-based NAS devices. I've attempted mapping the drive, providing the full paths to the folder, etc. I can browse to the share just fine from within Windows, but Backup Exec fails to create the B2D folder with different variants of an "Unable to create new backup folder. Access denied" error. I've attempted creating service accounts in Backup Exec to handle the authentication, but nothing seems to work. What's the key to making this work?

    Read the article

  • Booting the Windows 7 kernel from an initrd/WIM image file

    - by Ivo
    I'm wondering if it's possible to have the Win7 kernel and its drivers (especially storage drivers) boot from an initrd-like image file (maybe .wim?) and then later mount the Windows root partition and complete loading the full OS. I'll try to explain why: I'm running an emulated environment with NO REAL BIOS, and I'm passing through a RAID storage controller. I want Windows to boot from this controller's array, but of course the BCD manager cannot access disks in the array until the kernel and the relevant storage controller drivers are loaded. To be clear, I get the classic "winload.exe missing" error. I need a solution similar to what Linux does: load the kernel and its drivers, and then later mount the root partition and complete the boot. Any ideas or advice?

    Read the article

  • Commercial NAS RAID1 disks moved to Software Raid system?

    - by Rolnik
    I've got a couple of commercial NAS boxes and I'm wondering if they (ReadyNAS Duo, D-Link DNS-323) or any other NAS are suitable for having their RAIDed disks moved to a software-based RAID system. To be specific, I'm a big fan of the (largely) Debian-based Ubuntu. Can the aforementioned NAS drives be migrated to Ubuntu (e.g. using the mdadm Linux command)? Secondly, is there any commercial NAS that can be migrated over? Incidentally, here is a link to somebody who succeeded in a migration: http://www.linuxquestions.org/questions/slackware-14/moving-raid1-drives-into-computer-with-same-md-numbers-862312/ The specific scenario I'd like to prepare for is the eventual (sudden) death of one of the NAS motherboards.

    Read the article

  • VPN/OpenVPN as a cloud service

    - by 8pipe
    I am working on creating a small cloud (any number of EC2 instances that can be deployed based on load) implementing a VPN as a service for the company I'm working for. This is basically a project gathering together various VPN resources under one aegis as a cloud-based service. As a user of OpenVPN, I'm somewhat familiar with being able to connect, but I'm looking for resources to start this project. Essentially I need to be able to: run a certificate authority and manage keys to distribute to coworkers; build an AMI that handles OpenVPN as a service; balance the load among machine instances as needed. Any suggestions for tutorials, things to avoid, roadblocks I might not be seeing from a novice perspective, etc., or just help in visualizing this, are appreciated.

    Read the article

  • Authorization design-pattern / practice?

    - by Lawtonfogle
    On one end, you have users. On the other end, you have activities. I was wondering if there is a best practice to relate the two. The simplest way I can think of is to have every activity have a role, and assign every user every role they need. The problem is that this gets really messy in practice as soon as you go beyond a trivial system.

    A way I recently designed was to have users who have roles, roles that have privileges, and activities that require some combination of privileges. For the trivial case this is more complex, but I think it will scale better. But after I implemented it, I felt like it was overkill for the system I had. Another option would be to have users who have roles, and activities that require a certain role to perform, with many activities sharing roles. A more complex variant of this would give activities many possible roles, of which you only need one. And an even more complex variant would be to allow logical statements of role ownership to use an activity (e.g. must have A and (B exclusive-or C) and must not have D). I could continue to list more, but I think this already gives a picture, and many of these have trade-offs.

    But in software design there are oftentimes solutions that, while perhaps not perfect in every possible case, are clearly top of the pack, to an extent it isn't even considered opinion based (e.g. how to store passwords: plain text is worst, hashing is better, hashing with salt even better, despite the increased complexity of each level; or, Smart UI designs for applications are bad, even if it is subjective as to what the best design is). So, is there a best practice for authorization design that is not purely opinion based/subjective?
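    To make the middle option (users have roles, roles have privileges, activities require some combination of privileges) concrete, here is a minimal Python sketch of what I mean; the role and privilege names are made up purely for illustration:

        # Users have roles, roles grant privileges, activities require privileges.
        # All names here are illustrative, not taken from any particular system.
        ROLE_PRIVILEGES = {
            "editor": {"article.read", "article.write"},
            "admin":  {"article.read", "article.write", "user.manage"},
        }

        ACTIVITY_REQUIREMENTS = {
            "publish_article": {"article.write"},
            "delete_user":     {"user.manage"},
        }

        def privileges_of(user_roles):
            # Union of the privileges granted by each of the user's roles.
            privs = set()
            for role in user_roles:
                privs |= ROLE_PRIVILEGES.get(role, set())
            return privs

        def can_perform(user_roles, activity):
            # Allowed only if every privilege the activity requires is held.
            return ACTIVITY_REQUIREMENTS[activity] <= privileges_of(user_roles)

        print(can_perform({"editor"}, "publish_article"))  # True
        print(can_perform({"editor"}, "delete_user"))      # False

    The logical-statement variant would replace the set-inclusion check in can_perform with an expression evaluated against the user's roles.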

    Read the article

  • Linux, Apache, NetBeans, PHP == Windows, IIS/Cassini, Visual Studio, ASP.NET

    - by Neil Smith
    I've worked out how to get my Linux-based NetBeans PHP development machine to behave much like what happens when you create a new ASP.NET project in Visual Studio. Firstly, create multiple PHP projects in NetBeans, say for example mysite1 and mysite2. Next, edit the apache2/sites-enabled/000-default file and add two VirtualHost sections as below:

        <VirtualHost 127.0.1.1>
            ServerName mysite1.localhost
            DocumentRoot /var/www/mysite1/
        </VirtualHost>
        <VirtualHost 127.0.2.1>
            ServerName mysite2.localhost
            DocumentRoot /var/www/mysite2/
        </VirtualHost>

    For each site you add, pick a different IP address similar to the above, where I increment the third octet. Next, edit the /etc/hosts file and add the following two lines:

        127.0.1.1 mysite1.localhost
        127.0.2.1 mysite2.localhost

    Then in NetBeans, go to File->Project Properties, click on 'Run Configuration' and set 'Project Url' to http://mysite1.localhost for the first project and http://mysite2.localhost for the second project. That will give you a PHP development box which develops multiple PHP projects similar to how a Visual Studio Windows-based box handles multiple ASP.NET sites. Hope this helps someone :)

    Read the article

  • Dual-WAN router

    - by aix
    I am looking for a router that would fit the following requirements:

    - Two WAN interfaces: the primary is PPPoE, the secondary will link to a GigE port on another router (a 100 Mbps link will suffice)
    - Two (ideally four) GigE LAN ports
    - No requirement for a firewall
    - No requirement for Wi-Fi
    - Inexpensive

    The plan for the two WAN interfaces is as follows. All outbound traffic will go to the primary, with exceptions based on destination IP/subnet or possibly on src+dest IPs/subnets. Such exceptions should be routed to the secondary. It would be very nice if, should the primary go down, the secondary would automatically take over for all outbound traffic. I am reasonably sure that I can put something together based on dd-wrt. However, I'd like to hear from you what alternatives are out there (especially something easier to set up for my use case, even if it means paying more for the hardware).

    Read the article

  • How can I "diff" two files with Nautilus?

    - by bioShark
    I have installed Meld and found it's a great comparison tool. Unfortunately there is no integration with Nautilus 3.2, which means I can't right-click on files and select an option to open them in Meld for comparison. I have seen in the tool's comments that it needs the diff-ext package to be installed. This package has been removed from the Ubuntu universe, I am guessing because of GTK 3. Even after manually downloading the diff-ext package from SourceForge, when I try to configure it the check fails with the message:

        checking for DIFF_EXT... configure: error: Package requirements
        (libnautilus-extension >= 2.14.0 gconf-2.0 >= 2.14.0 gnome-vfs-module-2.0 >= 2.14) were not met:
        No package 'libnautilus-extension' found
        No package 'gconf-2.0' found
        No package 'gnome-vfs-module-2.0' found

    OK, so from this output I gather that GTK 2 is indeed required to install the diff extension for Nautilus. Now, my question is: is there a possibility to integrate Meld into Nautilus? Or are there any other diff-based tools which integrate with the current Nautilus, i.e. GTK 3 based? I am using Ubuntu 11.10, if there was any doubt so far. Cheers and thanks in advance.

    Read the article

  • Optimize Many-to-Many with SUMMARIZE and Other Techniques

    - by Marco Russo (SQLBI)
    We are still in the early days of DAX and, even though I have been using it for two years, there is still a lot to learn about it. One of the topics that has historically interested me (and many of the readers here, probably) is many-to-many relationships between dimensions in a dimensional data model. When Alberto and I wrote The Many to Many Revolution 2.0 we discovered the SUMMARIZE-based pattern very late in the writing of the whitepaper. It is very important for performance optimization and it should always be used. In the last month, Gerhard Brueckl also presented an approach based on cross table filtering behavior that simplifies the syntax involved, even if it's harder to explain how it works internally. I published a short article titled Optimize Many-to-Many Calculation in DAX with SUMMARIZE and Cross Table Filtering on the SQLBI website just to provide a quick reference to the three patterns available. A further study is still required to compare performance between the SUMMARIZE and Cross Table Filtering patterns. Up to now, I haven't observed big differences between them, even if their execution plans might not be identical, which suggests to me that depending on other conditions you might favor one over the other.

    Read the article

  • What emulator / VM software can I use to create a Win32-portable Linux Guest?

    - by Jotham
    Hi, I want to create a portable VM setup so that I can boot a Linux install regardless of which Windows XP / Windows 7 host machine I am on. I was looking at QEMU but it doesn't appear to have a relatively safe Win32 build. Other things like VirtualBox require a complete install on the host OS for performance reasons. I'm not so concerned about performance; I just want to run a few curses-based applications. My ideal end goal would be a memory stick of some size with a VM/emulator I can boot on most Windows XP / Windows 7 machines and access my own curses-based applications (probably Arch Linux or Debian). Any help would be appreciated. Regards,

    Read the article

  • Open Source Software Development Center at University of Belgrade

    - by Tori Wieldt
    A new Open Source Software Development Center is open at University of Belgrade, Serbia. It centers around using Java & NetBeans as open source projects to learn from and contribute to. Assistant Professor Zoran Sevarac says that not only does the center allow him to teach software development using open source projects, but also "we are improving our University courses based on the experience we get from working on open source code."  Some of the projects underway are a NetBeans UML plugin; Neuroph (a Java neural network framework, with a NetBeans Platform-based UI); a NetBeans DOAP Plugin; WorkieTalkie (NetBeans chat plugin); and 2D and 3D visualization plugins for NetBeans. University of Belgrade also has an official university course about open source development, where students learn to use development tools, work in teams, participate in open source projects and learn from real world software development projects. Students, teachers, and researchers at the University of Belgrade, and any member of the open source community are welcome to come to learn software development from successful open source projects. For more information, you can contact Zoran Sevarac (@neuroph on Twitter).

    Read the article

  • How IBM Implements the WebSphere Application Server SDK for Sun Solaris OS

    - by Eng Al-Rawabdeh
    I deployed the same application on IBM WAS on different operating systems (Windows, AIX and Sun Solaris), and SDK errors appeared only on the Solaris OS. I checked some sites and they say that the SDK on Solaris OS was built based on the Sun SDK - is that right? So please, I need to know whether IBM built the Solaris SDK from scratch or based it on the Sun SDK. More details: I installed the same IBM WAS application server on two servers, as follows: 1. Server1 - OS (AIX); 2. Server2 - OS (Solaris). These two servers are on the same network and have the same configuration. I then deployed a Java application (X) on both servers. Application X ran on Server1 (AIX) without any problem, but when I ran the application on Server2 (Solaris OS) I faced an SDK issue. So I need to know: what is the difference between the AIX WAS SDK and the Solaris WAS SDK? Note: I tried Windows and it ran without any problem.

    Read the article

  • ADF Essentials - Available for free and certified on GlassFish!

    - by delabassee
    If you are an Oracle customer, you are probably familiar with Oracle ADF (Application Development Framework). If you are not, ADF is, in a nutshell, a Java EE based framework that simplifies the development of enterprise applications. It is the development framework that was used, among other things, to build Oracle Fusion Applications. Oracle has just released ADF Essentials, a free-to-develop-and-deploy version of Oracle ADF's core technologies. And as good news never comes alone, GlassFish 3.1.2 is now a certified container for ADF Essentials! ADF Essentials leverages core ADF features and includes:

    - Oracle ADF Faces - a set of more than 150 JSF 2.0 rich components that simplify the creation of rich Web user interfaces (charting, data visualization, advanced tables, drag and drop, touch gesture support, extensive windowing capabilities, etc.)
    - Oracle ADF Controller - an extension of the JSF controller that helps build reusable process flows and provides the ability to create dynamic regions within Web pages.
    - Oracle ADF Binding - an XML-based, meta-data abstraction layer to connect user interfaces to business services.
    - Oracle ADF Business Components - a declaratively-configured layer that simplifies developing business services against relational databases by providing reusable components that implement common design patterns.

    ADF is a highly declarative framework and has always had very good tooling support. Visual development for Oracle ADF Essentials is provided in Oracle JDeveloper 11.1.2.3. Eclipse support is planned for a later OEPE (Oracle Enterprise Pack for Eclipse) release. Here are some relevant links to quickly learn how to use ADF Essentials on GlassFish:

    - Video: Oracle ADF Essentials Overview and Demo
    - Deploying Oracle ADF Essentials Applications to Glassfish
    - OTN: Oracle ADF Essentials Resources

    Read the article

  • perfmon reporting higher IOPs than possible?

    - by BlueToast
    We created a monitoring report for IOPS using the Disk Reads/sec and Disk Writes/sec performance counters on four servers (physical boxes, no virtualization), each with 4x 15k 146GB SAS drives in RAID10, set to check and record data every second, and logged for 24 hours before stopping the reports. These are the results we got:

    Server1 - Maximum disk reads/sec: 4249.437; Maximum disk writes/sec: 4178.946
    Server2 - Maximum disk reads/sec: 2550.140; Maximum disk writes/sec: 5177.821
    Server3 - Maximum disk reads/sec: 1903.300; Maximum disk writes/sec: 5299.036
    Server4 - Maximum disk reads/sec: 8453.572; Maximum disk writes/sec: 11584.653

    The average disk reads and writes per second were generally low, e.g. for one particular server the average was around 33 writes/sec, but when monitoring in real time it would often spike up to several hundred and sometimes into the thousands. Could someone explain to me why these numbers are significantly higher than theoretical calculations assuming each drive can do 180 IOPS? Additional details (RAID card): HP Smart Array P410i, total cache size of 1GB, write cache is disabled, array accelerator cache ratio is 25% read and 75% write.
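    For reference, the theoretical ceiling we are comparing against works out as follows; this is a minimal sketch in Python, assuming the 180 IOPS per spindle figure above and the usual RAID10 mirror write penalty of 2:

        # Theoretical random-IOPS ceiling for 4x 15k SAS drives in RAID10,
        # assuming 180 IOPS per spindle as stated above.
        drives = 4
        iops_per_drive = 180

        max_read_iops = drives * iops_per_drive        # 720: a read can be served by any spindle
        max_write_iops = drives * iops_per_drive // 2  # 360: each write lands on both mirror sides

        print(max_read_iops, max_write_iops)  # 720 360

    Sustained rates far above these ceilings would have to be absorbed by caching or request coalescing somewhere above the spindles, which is what makes the perfmon figures look impossible at first glance.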

    Read the article

  • How to scale out OpenStreetMap data efficiently

    - by Pierre
    For over a year now, I've been running an in-house PostGIS server filled with OSM data, used for both Mapnik-based tile generation and Nominatim-based geocoding, and updated with daily replicates. This works pretty well. However, as usage is growing exponentially, I would like to achieve better reliability and performance by adding additional PostgreSQL servers, and I'm kind of lost. Since PostgreSQL doesn't seem to handle replication by itself, I would think about using a piece of middleware like PgPool-II to keep the servers in sync. But I'm afraid it would be more than is necessary for this usage: a very high read-to-write ratio, where all writes are done at the same exact time every day. My questions are simple: what would you do to keep these servers in sync? And what is done for this at the OpenStreetMap Foundation, MapQuest, Mapbox or CloudMade? Thanks.

    Read the article

  • How to make and restore incremental snapshots of hard disk

    - by brunopereira81
    I use VirtualBox a lot for distro/application testing purposes. One of the features I simply love about it is virtual machine snapshots: it saves the state of a virtual machine and can restore it to its former glory if something you did went wrong, without any problems and without consuming all your hard disk space. On my live systems I know how to create a 1:1 image of the file system, but all the solutions I've found will create a new image of the complete file system. Are there any programs / file systems that are capable of taking a snapshot of the current file system and saving it to another location, but instead of making a complete new image, creating incremental backups? To describe what I want simply: it should be like dd images of a file system, but instead of only full backups it would also create incremental ones. I am not looking for Clonezilla, etc. It should run within the system itself with no (or almost no) intervention from the user, but contain all the data of the file systems. I am also not looking for a duplicity "back up your whole system excluding some folders" script plus dd to save your MBR; I can do that myself, I'm looking for extra finesse. I'm looking for something I can do before making massive changes to a system, so that if something went wrong, or I burned my hard disk after spilling coffee on it, I can just boot from a live CD and restore a working snapshot to the hard disk. It does not need to be daily; it doesn't even need a schedule. Just run it once in a while and let it do its job, and preferably raw-based, not file-copy based.

    Read the article

  • Copy/move EBS volume from one Region to another

    - by Gnanam
    Background of our setup: we've hosted our web-based application in the Amazon EC2 US East (Virginia) Region. Our instance is based on a Linux distribution (CentOS) and the AMI is S3-backed. One EBS volume (400 GB in size) is attached to this instance. Question: we've planned to migrate our deployment to the US West (N. California) Region. From the AWS docs, I understood that for moving an AMI there is a command-line tool available - ec2-migrate-bundle. But for moving an EBS volume across Regions, there is currently no tool available. I'm looking for the easiest and/or fastest way of copying/moving an EBS volume from one Region to another. Also, are there any hidden risks involved during and/or after the migration? Experts' ideas/suggestions/recommendations on this are highly appreciated.

    Read the article

  • Oracle OpenWorld Session: “Business Driven Development with BPM: Lessons from the Real World”

    - by Ajay Khanna
    One of the key values that BPM promises is "Business Empowerment". The people closest to the processes, who participate in the process every day, are the ones who know most about the process. These are the people who run day-to-day operations, who triage customer issues, and who envision improvements and innovations. It is therefore imperative that when a company decides to use BPM technology to automate their business processes, business people take the driver's seat. BPM is not an IT-only project. The Oracle BPM Suite has been designed keeping this core tenet of BPM, Business Empowerment, in mind. The result is the business-user-centered design of Process Composer. Process Composer is designed to let business users document their processes, analyze them using simulation, create web forms, specify business rules and even run them in testing mode using the process player, to see if the designed process meets their needs. This does not mean that IT has no role in this process. In fact, the Oracle BPM Suite has made it very easy for Business and IT to collaborate. The same process can be shared among business and IT stakeholders, and each can collaborate to create model-driven, process-based executable applications. A process may need to integrate with multiple systems via various mechanisms, and IT leads the system and data integration effort. IT helps fine-tune the performance of process applications and ensures that the deployment of a process application meets scalability and failover standards. In this session, we saw Harish Gaur and Satya Narayanan from Oracle demonstrate the roles Business and IT play in BPM projects and how the Oracle BPM Suite enables business and IT collaboration to design and automate process-based applications. They also discussed real-life customer stories. Some key takeaways from this session:

    - There are no IT projects, only business initiatives requiring IT support
    - Identify high-impact processes - critical, better BPM ROI
    - Identify key metrics to measure process performance
    - Align business with the IT layer

    Read the article

  • What are the requirements to test a website using jquery.get() ? [migrated]

    - by Frankie
    I am working on a simple website. It has to search quite a few text files in different sub-folders. The rest of the page uses jQuery, so I would like to use it for this also. The function I am looking at is .get() for downloading the files. So my main question is: can I test this on my local computer (Ubuntu Linux) or do I have to upload it to a server? Also, if there's a better way to go about this, that would be nice to know. However, I'm more worried about getting it working. Thanks, Frankie

    PS: Here's the JS/jQuery code for downloading the files to an array.

        g_lists = new Array();

        $(":checkbox").each(function(i){
            if ($(this).attr("name") != "0") {
                var path = "../" + $(this).attr("name") + ".txt";
                $("#bot").append("<br />" + path); // debug
                $.get(path, function(data){
                    g_lists[i] = data;
                    $("#bot").html(data);
                });
            } else {
                g_lists[i] = "";
            }
        });

    Edit: Just a note about the path variable. I think it's correct, but I'm not 100% sure. I'm new to web development. Here are some examples it produces and the directory tree of the site. Maybe it will help; can't hurt.

        .
        +-- include
        │   +-- jquery.js
        │   +-- load.js
        +-- index.xhtml
        +-- style.css
        +-- txt
            +-- Scripting_Tools
                +-- Editors.txt
                +-- Other.txt

    Examples of path: ../txt/Scripting_Tools/Editors.txt and ../txt/Scripting_Tools/Other.txt

    Well, I'm a new user, so I can't "answer" my own question, so I'll just post it here: after asking for help on an IRC channel specific to jQuery, I was told I could use this on a local host. To do this I installed the Apache web server and copied my site into its directory. More information on setting it up can be found here: http://www.howtoforge.com/ubuntu_debian_lamp_server Then to run the site I navigated my browser to "localhost" and everything works.

    Read the article

  • Microeconomic simulation: coordination/planning between self-interested trading agents

    - by Milton Manfried
    In a typical perfect-information strategy game like Chess, an agent can calculate its best move by searching the state tree for the best possible move, while assuming that the opponent will also make the best possible move (i.e. minimax). I would like to use this approach in a "game" modeling economic activity, where the possible "moves" would be to buy or sell for a given price, and the goal, rather than a specific class of states (e.g. checkmate), would be to maximize some function F of the agent's state (e.g. F(money, widget) = 10*money + widget).

    How do I handle buy/sell actions that require coordination between both parties, at the very least agreement upon a price? The cheap way out would be to set the price beforehand, maybe based upon the current supply -- but the idea of this simulation is to examine how prices emerge when freely determined by "perfectly rational" agents. A great example of what I do not want is the trading algorithm in SugarScape -- paraphrasing from Growing Artificial Societies, p101-102: when a pair of agents interact to trade, they each compute their internal valuations of the goods, then a bargaining process is conducted and a price is agreed to; if this price makes both agents better off, they complete the transaction.

    The protocol itself is beautiful, but what it cannot capture (as far as I can tell) is the ability for an agent to pay more than it might otherwise for a good, because it knows that it can sell it for even more at a later date -- what appears to be called "strategic thinking" in this paper at Google Books: Multi-Agent-Based Simulation III: 4th International Workshop, MABS 2003. To get realistic behavior like that, it seems one would either (1) have to build an outrageously complex internal valuation system which could at best only cover situations that were planned for at compile time, or otherwise (2) have some mechanism to search the state tree... which would require some way of planning future trades.

    Note: the chess analogy only works as far as the state-space search goes; the simulation isn't intended to be "zero sum", so a literal minimax search wouldn't be appropriate -- and ideally, it should work with more than two agents.
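    To make the SugarScape-style protocol concrete, here is a minimal sketch of that bilateral trade; the agent fields, the per-agent widget valuations and the midpoint bargaining rule are my own illustrative assumptions, not taken from the book, and the "buy now to resell later" behavior I'm after is exactly what this sketch cannot express:

        # Minimal sketch of a SugarScape-style bilateral trade.
        class Agent:
            def __init__(self, money, widgets, widget_value):
                self.money = money
                self.widgets = widgets
                self.widget_value = widget_value  # internal valuation of one widget, in money

        def try_trade(buyer, seller):
            """Agree on a price via a simple bargaining rule; transact only if both gain."""
            price = (buyer.widget_value + seller.widget_value) / 2.0  # midpoint bargaining
            if buyer.money < price or seller.widgets < 1:
                return None
            gain_buyer = buyer.widget_value - price    # value received minus money paid
            gain_seller = price - seller.widget_value  # money received minus value given up
            if gain_buyer > 0 and gain_seller > 0:     # both agents strictly better off
                buyer.money -= price
                buyer.widgets += 1
                seller.money += price
                seller.widgets -= 1
                return price
            return None

        # Example: the buyer values a widget at 3, the seller at 1 -> they trade at 2.
        b = Agent(money=10, widgets=0, widget_value=3.0)
        s = Agent(money=0, widgets=5, widget_value=1.0)
        print(try_trade(b, s))  # 2.0

    A state-tree search would replace the fixed widget_value with a valuation derived from expected future trades, which is the part the sketch leaves out.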

    Read the article

  • Frame Independent Movement

    - by ShrimpCrackers
    I've read two other threads here on movement, "Time based movement vs frame rate based movement?" and "Fixed time step vs variable time step", but I think I'm lacking a basic understanding of frame-independent movement, because I don't understand what either of those threads is talking about. I'm following along with lazyfoo's SDL tutorials and came upon the frame-independent lesson: http://lazyfoo.net/SDL_tutorials/lesson32/index.php I'm not sure what the movement part of the code is trying to say, but I think it's this (please correct me if I'm wrong): in order to have frame-independent movement, we need to find out how far an object (e.g. a sprite) moves within a certain time frame, for example 1 second. If the dot moves at 200 pixels per second, then I need to calculate how much it moves within that second by multiplying 200 pps by 1/1000 of a second. Is that right? The lesson says: "velocity in pixels per second * time since last frame in seconds. So if the program runs at 200 frames per second: 200 pps * 1/200 seconds = 1 pixel." But... I thought we were multiplying 200 pps by 1/1000th of a second. What is this business with frames per second? I'd appreciate it if someone could give me a little more detailed explanation as to how frame-independent movement works. Thank you.
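    Here is what I think the lesson's formula ("velocity * time since last frame") looks like in practice; this is a minimal illustrative sketch in Python, not the lazyfoo C++ code, so please correct this too if it's wrong:

        # Frame-independent movement: the dot covers 200 pixels per second
        # no matter how many frames the loop manages to run.
        import time

        VELOCITY = 200.0   # pixels per second
        x = 0.0

        last = time.time()
        while x < 800:
            now = time.time()
            dt = now - last        # time since last frame, in seconds
            last = now
            x += VELOCITY * dt     # at 200 FPS, dt is about 1/200 s -> about 1 pixel per frame
            time.sleep(0.005)      # stand-in for the per-frame rendering work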

    Read the article

  • Running multiple sites with multiple domains in Apache

    - by PsychoData
    I am having a rough time running Apache with multiple domain names; a snippet of my config file is below. I keep getting an error saying that NameVirtualHost has no VirtualHosts. I want them both running on the same IP, and I'm not sure why this doesn't work. I've been digging through the documentation for VirtualHost, NameVirtualHost, and Apache's page about name-based virtual hosting. The example on the name-based page is almost exactly my config! What am I doing wrong?

        Listen *:80
        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName www.sample1.net
            DocumentRoot /var/www/sample1-net
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.example2.net
            DocumentRoot /var/www/example2-net
        </VirtualHost>

    Read the article

  • What reasons are there to reduce the max-age of a logo to just 8 days? [closed]

    - by callum
    Most websites set max-age=31536000 (1 year) in the Cache-Control headers of static assets such as logo images. Examples: YouTube, Yahoo, Twitter, the BBC. But there is a notable exception: Google's logo has max-age=691200 (8 days). I've checked the headers on the Google logo in the past, and it definitely used to be 1 year. (Also, it used to be part of a sprite, and now it is a standalone logo image, but that's probably another question...) What could be valid technical reasons why they would want to reduce its cache lifetime to just 8 days? Google's homepage is one of the most carefully optimised pages in the world, so I imagine there's a good reason.

    Edit: Please make sure you understand these points before answering:

    - Nobody uses short max-age lifetimes to allow modifying a static asset in the future. When you modify it, you just serve it at a different URL. So no, it's nothing to do with Google doodles. Think about it: even if Google didn't understand this basic trick of HTTP, 8 days still wouldn't be appropriate, as only those users who don't have the original logo cached would see the doodle on doodle day -- and then that group of users would go on seeing the doodle for the following 8 days after Google changed it back :)
    - Web servers do not worry about "filling up" the caches of clients (or proxies). The client manages this by itself: when it hits its own storage limit, it just starts dropping the lowest-priority items to make space for new items. The priority score is based on the question "How likely am I to benefit from having cached this URL?", which has nothing to do with what max-age value the server sent when the URL was originally requested; it's a heuristic based on the "frecency" of requests for that URL. The max-age simply lets the server set a cut-off point: the time at which the client is supposed to discard the item regardless of how often it's being re-used. It would be very nice and trusting of a downstream client/proxy to rely on all origin servers "holding back" from filling up their caches, but I don't think we live in that world ;)

    Read the article

  • Horse Drawn Fiber Optics Bring Broadband to Remote Areas

    - by Jason Fitzpatrick
    When you think of fiber optics and high-speed internet, the last thing you likely think of is... horses. Yet horses have been put to use rolling out fiber optics to remote rural locations. In Vermont, a Belgian draft horse named Fred, seen in the photo above being tended by his handler Claude, is a distinctly 19th-century solution to a 21st-century problem: how to run fiber optic cable through remote areas where trucks cannot easily pass. The man and animal are indispensable to cable and phone-service provider FairPoint Communications because they can easily access hard-to-reach job sites along country roads, which bulky utility trucks often cannot. "It just saves so much work - it would take probably 15 guys to do what Fred and Claude can do," said Paul Clancy, foreman of a line crew from FairPoint. "They can pull 5,000 feet of cable with no sweat." You can read more about the use of draft horses to draw lines and the roll-out of broadband to rural Vermont at the link below: Vermont Uses Draft Horse to Lay Cables for Internet Access [Reuters]

    Read the article
