Search Results

Search found 17771 results on 711 pages for 'dhcp option'.


  • How to filter a duration in SQL Profiler

    - by user341460
    Hi friends, I need to run Profiler on SQL Server 2005 to capture the stored procedures that took longer than 1/10th of a second. Can you please let me know how I can do that? I don't see the option. Also, is the duration measured in seconds or minutes? I would appreciate your help. Thanks.

    Read the article

  • Adding to an Array

    - by j-t-s
    Hi all, I have an array: String[] ay = { "blah", "blah number 2", "etc" }; ... But now I want to add to this array at a later time, but I see no option to do so. How can this be done? I keep getting a message saying that String cannot be converted to String[]. Thank you.
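    A common approach (a minimal sketch, not part of the original question): .NET arrays are fixed-size, so "adding" means resizing into a new array, or better, switching to a List<string>:

        using System;
        using System.Collections.Generic;

        class Program
        {
            static void Main()
            {
                // Arrays are fixed-size: "adding" really allocates a larger array.
                string[] ay = { "blah", "blah number 2", "etc" };
                Array.Resize(ref ay, ay.Length + 1);
                ay[ay.Length - 1] = "added later";

                // A List<string> grows on demand and is usually the better choice
                // when items will be added over time.
                var list = new List<string>(ay);
                list.Add("yet another");
                Console.WriteLine(string.Join(", ", list));
            }
        }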

    Read the article

  • Calendar Extender problem

    - by picnic4u
    In an ASP.NET project, the CalendarExtender control works well in IE but not in Mozilla Firefox, Chrome, etc. The problem is with the feature where clicking the top of the calendar opens further options for year and month selection.

    Read the article

  • How to Stop the Window Animation in Win XP SP3 Permanently?

    - by epale
    Hi everyone, may I know how I can permanently get rid of the window animation (seen when you minimise or maximise a window) in Win XP SP3? I have tried using Windows PowerToys TweakUI, as well as going to Control Panel > Adjust visual effects and unchecking the "Animate windows when maximising and minimising" option. The problem is that the window animation disappears at first but returns again some time later. Thank you very much.
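    A registry-level sketch in C# (hedged: as far as I know, both TweakUI and the visual effects checkbox ultimately toggle this same MinAnimate value, so if the animation keeps coming back, something on the machine is rewriting it; reapplying the value, for example at logon, is one workaround):

        using Microsoft.Win32;

        class DisableWindowAnimation
        {
            static void Main()
            {
                // "0" disables the minimize/maximize animation; "1" re-enables it.
                // The change takes effect at the next logon.
                Registry.SetValue(
                    @"HKEY_CURRENT_USER\Control Panel\Desktop\WindowMetrics",
                    "MinAnimate", "0");
            }
        }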

    Read the article

  • How to create many divs with 100% height?

    - by ChrisBenyamin
    I need an HTML document that contains multiple divs with 100% height (screen filling), one below the other. I have tried applying a height of 100% to every element, but that doesn't work seamlessly or cleanly. Maybe there is an option with JavaScript? I have no idea. Please suggest your solutions. chris

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure. Today's new capabilities include:

    - Storage: Import/Export Hard Disk Drives to your Storage Accounts
    - HDInsight: General Availability of our Hadoop Service in the cloud
    - Virtual Machines: New VM Gallery, ACL support for VIPs
    - Web Sites: WebSocket and Remote Debugging Support
    - Notification Hubs: Segmented customer push notification support with tag expressions
    - TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services
    - Developer Analytics: New Relic support for Web Sites + Mobile Services
    - Service Bus: Support for partitioned queues and topics
    - Billing: New Billing Alert Service that sends email notifications when your bill hits a threshold you define

    All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them.

    Storage: Import/Export Hard Disk Drives to Windows Azure

    I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we'll automatically transfer the data to or from your Windows Azure Storage account. This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth).

    Encrypted Transport

    Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it). The drive preparation tool we are shipping today makes setting up BitLocker encryption on these hard drives easy.

    How to Import/Export your first Hard Drive of Data

    You can read our Getting Started Guide to learn more about how to begin using the import/export service. You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs.

    It is really easy to create a new import or export job using the Windows Azure Management Portal. Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don't have this tab make sure to sign up for the Import/Export preview). Then click the "Create Import Job" or "Create Export Job" commands at the bottom of it. This will launch a wizard that easily walks you through the steps required.

    For more comprehensive information about Import/Export, refer to the Windows Azure Storage team blog. You can also send questions and comments to the [email protected] email address.

    We think you'll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects. We hope you like it.

    HDInsight: 100% Compatible Hadoop Service in the Cloud

    Last week we announced the general availability release of Windows Azure HDInsight.
    HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure. This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios.

    HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running, this provides a super cost effective way to use them). You can create Hadoop clusters using either the Windows Azure Management Portal or using our PowerShell and Cross Platform Command line tools.

    The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data. We've also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs. You can find out more about how to get started with HDInsight here.

    Virtual Machines: VM Gallery Enhancements

    Today's update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud. You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal.

    The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use:

    - Search: You can now easily search and filter images using the search box in the top-right of the dialog. For example, simply type "SQL" and we'll filter to show those images in the gallery that contain that substring.
    - Category Tree-view: Each month we add more built-in VM images to the gallery. You can continue to browse these using the "All" view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog. For example, by selecting "Oracle" in the tree-view you can now quickly filter to see the official Oracle supplied images.
    - MSDN and Supported checkboxes: With today's update we are also introducing filters that make it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners.
    - Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we're providing some additional sort options, like "Newest," to customize the image list for what suits you best.
    - Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery.

    The above improvements make it even easier to use the VM Gallery and quickly create, launch, and run Virtual Machines in the cloud.
    Virtual Machines: ACL Support for VIPs

    A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today's release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance.

    This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM's network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren't on a virtual network you could alternatively limit traffic from public IPs that can access your workloads.

    Here are the default behaviors for ACLs in Windows Azure:

    - By default (i.e. no rules specified), all traffic is permitted.
    - When using only Permit rules, all other traffic is denied.
    - When using only Deny rules, all other traffic is permitted.
    - When there is a combination of Permit and Deny rules, all other traffic is denied.

    Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level. So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to configure your guest VM firewall appropriately as well.

    Web Sites: Web Sockets Support

    With today's release you can now use Web Sockets with Windows Azure Web Sites. This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier). Higher level programming libraries like SignalR and socket.io are also now supported with it.

    You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to "on". Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications. Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it.

    Web Sites: Remote Debugging Support

    The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today's Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud.

    Remote Debugging of a Windows Azure Web Site using VS 2013

    Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy. Start by opening up your web application's project within Visual Studio. Then navigate to the "Server Explorer" tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer. Then right-click and choose the "Attach Debugger" option on it. When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.
    The debugger will then stop the web site's execution when it hits any break points that you have set within your web application's project inside Visual Studio. For example, I set a breakpoint on the "ViewBag.Message" assignment statement within the HomeController of the standard ASP.NET MVC project template. When I hit refresh on the "About" page of the web site within the browser, the breakpoint was triggered and I was able to debug the app remotely using Visual Studio.

    Note how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property. When we click the "Continue" button (or press F5) the app will continue execution and the Web Site will render the content back to the browser. This makes it super easy to debug web apps remotely.

    Tips for Better Debugging

    To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio's Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio. You can find this option on the Web Publish dialog on the Settings tab.

    When you ultimately deploy/run the application in production we recommend using the "Release" configuration setting – the release configuration is memory optimized and will provide the best production performance. To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide.

    Notification Hubs: Segmented Push Notification support with tag expressions

    In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end. Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises.

    Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users and groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans, each with a single API call.

    New support for using tag expressions to enable advanced customer segmentation

    With today's release we are adding support for even more advanced customer targeting. You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step further and reach more granular segments. This opens up a variety of scenarios, for example:

    - Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian
    - Push content to multiple segments in a single message—e.g.
    rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinals fan
    - Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder

    To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. "Loc:Boston") and interest tags (e.g. "Follows:RedSox", "Follows:Cardinals"), and then a notification can be sent by your back-end to "(Follows:RedSox || Follows:Cardinals) && Loc:Boston" in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below:

        // hub is a NotificationHubClient for the target notification hub
        var notification = new WindowsNotification(messagePayload);
        hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");

    In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!). Some other cool use cases for tag expressions that are now supported include:

    - Social: To "all my group except me" - group:id && !user:id
    - Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 …
    - Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood
    - Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android

    For help on getting started with Notification Hubs, visit the Notification Hub documentation center. Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions. They are really powerful and enable a bunch of great new scenarios.

    TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services

    With today's Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services. Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support. It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio.

    With today's Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services. This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required.

    The steps below demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services.

    Enabling Continuous Delivery to Windows Azure with Team Foundation Services

    The project I'm going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I'm hosting using Team Foundation Services.
    I did this by creating a "SimpleContinuousDeploymentTest" repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it. I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks). Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository.

    The cool thing about this is that I don't have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings. This build server (and automated testing) support now works with both TFS and Git based source control repositories.

    Connecting a Team Foundation Services project to Windows Azure

    Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass). Enabling this is now really easy.

    To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal. In the dialog this creates, I gave the web site a name and then made sure the "Publish from source control" checkbox was selected. When we click next we'll be prompted for the location of the source repository. We'll select "Team Foundation Services". Once we do this we'll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is "scottguthrie").

    When we click the "Authorize Now" button we'll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account. Once we do this we'll be prompted to pick the source repository we want to connect to. Starting with today's Windows Azure release you can now connect to both TFS and Git based source repositories. This new support allows me to connect to the "SimpleContinuousDeploymentTest" repository we created earlier.

    Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services. Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and if they pass the app will be automatically deployed to our Web Site in Windows Azure. You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site.

    This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way.

    Developer Analytics: New Relic support for Web Sites + Mobile Services

    With today's Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Sites and Windows Azure Mobile Services. We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure.
    Enabling New Relic with a Windows Azure Web Site

    Enabling New Relic support with a Windows Azure Web Site is now really easy. Simply navigate to the Configure tab of a Web Site and scroll down to the "developer analytics" section that is now within it. Clicking the "add-on" button will display some additional UI. If you don't already have a New Relic subscription, you can click the "view windows azure store" button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything).

    Clicking the "view windows azure store" button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal. You can use this to browse from a variety of great add-on services – including New Relic. Select "New Relic" within the dialog, then click the next button, and you'll be able to choose which type of New Relic subscription you wish to purchase. For this demo we'll simply select the "Free Standard Version" – which does not cost anything and can be used forever.

    Once we've signed up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site's configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site. We can do this by simply selecting it from the "add-on" dropdown (it is automatically populated within it once we have a New Relic subscription in our account). Clicking the "Save" button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site.

    Deploying the New Relic Agent as part of a Web Site

    The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app. We can do this within Visual Studio by right-clicking on our web project and selecting the "Manage NuGet Packages" context menu. This will bring up the NuGet package manager. You can search for "New Relic" within it to find the New Relic agent. Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site's "Configuration" tab within the Windows Azure Management Portal).

    Once we install the NuGet package we are all set to go. We'll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application.

    Monitoring a Web Site using New Relic

    Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it. We can do this by clicking on the "Add Ons" tab in the left-hand side of the Windows Azure Management Portal. Then select the New Relic add-on we signed up for within it. The Windows Azure Management Portal will provide some default information about the add-on when we do this. Clicking the "Manage" button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account. When we do this a new browser tab will launch with the New Relic admin tool loaded within it.

    We can now see insights into how our app is performing – without having to have written a single line of monitoring code.
    The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see:

    - Performance times (including browser rendering speed) for the overall site and individual pages. You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify.
    - Information about where in the world your customers are hitting the site from (and how performance varies by region)
    - Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc)
    - Error information including call stack details for exceptions that have occurred at runtime
    - SQL Server profiling information – including which queries executed against your database and what their performance was
    - And a whole bunch more…

    The cool thing about New Relic is that you don't need to write monitoring code within your application to get all of the above reports (plus a lot more). The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these. This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort.

    If you haven't tried New Relic out yet with Windows Azure I recommend you do so – I think you'll find it helps you build even better cloud applications. Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes.

    Service Bus: Support for partitioned queues and topics

    With today's release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic. The multiple messaging stores will also provide higher availability.

    You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic, or programmatically (see the code sketch after the Billing section below). Read this article to learn more about partitioned queues and topics and how to take advantage of them today.

    Billing: New Billing Alert Service

    Today's Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure. This makes it easier to manage your bill and avoid potential surprises at the end of the month.

    With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total. To set up an alert first sign up for the free Billing Alert Service Preview. Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available. The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit. For example, by clicking the "add alert" button I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month.

    The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS. Try out the new Billing Alert Service Preview today and give us feedback.
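    As a companion to the Service Bus partitioning feature above, here is a minimal C# sketch that creates a partitioned queue programmatically with the SDK's NamespaceManager; the connection string and the "ordersqueue" name are placeholder values, not something from this post:

        using Microsoft.ServiceBus;
        using Microsoft.ServiceBus.Messaging;

        class PartitionedQueueSetup
        {
            static void Main()
            {
                // Placeholder connection string for your Service Bus namespace.
                var connectionString = "Endpoint=sb://yournamespace.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=yourkey";
                var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

                // EnablePartitioning spreads the queue across multiple message brokers
                // and messaging stores, which is what raises throughput and availability.
                var description = new QueueDescription("ordersqueue") { EnablePartitioning = true };
                if (!namespaceManager.QueueExists(description.Path))
                {
                    namespaceManager.CreateQueue(description);
                }
            }
        }

    The same EnablePartitioning flag exists on TopicDescription for creating partitioned topics.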
    Summary

    Today's Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Automating Solaris 11 Zones Installation Using The Automated Install Server

    - by Orgad Kimchi
    Introduction

    This document shows how to use the Oracle Solaris 11 Automated Install (AI) server to automate the installation of Solaris 11 Zones. I will demonstrate how to set up the Automated Install server in order to provide a hands-off installation process for the Global Zone and two Non-Global Zones located on the same system.

    Architecture layout:

    Figure 1. Architecture layout

    Prerequisite

    Set up the Automated Install server using the following instructions: "How to Set Up Automated Installation Services for Oracle Solaris 11". The first step in this setup will be creating two Solaris 11 Zones configuration files.

    Step 1: Create the Solaris 11 Zones configuration files

    The Solaris Zones configuration files should be in the format of the zonecfg export command.

        # zonecfg -z zone1 export > /var/tmp/zone1
        # cat /var/tmp/zone1
        create -b
        set brand=solaris
        set zonepath=/rpool/zones/zone1
        set autoboot=true
        set ip-type=exclusive
        add anet
        set linkname=net0
        set lower-link=auto
        set configure-allowed-address=true
        set link-protection=mac-nospoof
        set mac-address=random
        end

    Create a backup copy of this file under a different name, for example, zone2.

        # cp /var/tmp/zone1 /var/tmp/zone2

    Modify the second configuration file with the zone2 configuration information. You should change the zonepath, for example:

        set zonepath=/rpool/zones/zone2

    Step 2: Copy and share the Zones configuration files

    Create the NFS directory for the Zones configuration files:

        # mkdir /export/zone_config

    Share the directory for the Zones configuration files:

        # share -o ro /export/zone_config

    Copy the Zones configuration files into the NFS shared directory:

        # cp /var/tmp/zone1 /var/tmp/zone2 /export/zone_config

    Verify that the NFS share has been created using the following command:

        # share
        export_zone_config      /export/zone_config     nfs     sec=sys,ro

    Step 3: Add the Global Zone as a client to the Install Service

    Use the installadm create-client command to associate the client (Global Zone) with the install service. To find the MAC address of a system, use the dladm command as described in the dladm(1M) man page. The following command adds the client (Global Zone) with MAC address 0:14:4f:2:a:19 to the s11x86service install service.

        # installadm create-client -e "0:14:4f:2:a:19" -n s11x86service

    You can verify the client creation using the following command:

        # installadm list -c
        Service Name  Client Address     Arch   Image Path
        ------------  --------------     ----   ----------
        s11x86service 00:14:4F:02:0A:19  i386   /export/auto_install/s11x86service

    We can see the client's install service name (s11x86service), MAC address (00:14:4F:02:0A:19) and architecture (i386).

    Step 4: Global Zone manifest setup

    First, get a list of the installation services and the manifests associated with them:

        # installadm list -m
        Service Name   Manifest        Status
        ------------   --------        ------
        default-i386   orig_default   Default
        s11x86service  orig_default   Default

    Then probe the s11x86service and the default manifest associated with it. The -m switch reflects the name of the manifest associated with a service. Since we want to capture that output into a file, we redirect the output of the command as follows:

        # installadm export -n s11x86service -m orig_default > /var/tmp/orig_default.xml

    Create a backup copy of this file under a different name, for example, orig_default2.xml, and edit the copy.
        # cp /var/tmp/orig_default.xml /var/tmp/orig_default2.xml

    Use the configuration element in the AI manifest for the client system to specify non-global zones. Use the name attribute of the configuration element to specify the name of the zone. Use the source attribute to specify the location of the config file for the zone. The source location can be any http:// or file:// location that the client can access during installation.

    The following sample AI manifest specifies two Non-Global Zones: zone1 and zone2. You should replace server_ip with the IP address of the NFS server.

        <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
        <auto_install>
          <ai_instance>
            <target>
              <logical>
                <zpool name="rpool" is_root="true">
                  <filesystem name="export" mountpoint="/export"/>
                  <filesystem name="export/home"/>
                  <be name="solaris"/>
                </zpool>
              </logical>
            </target>
            <software type="IPS">
              <source>
                <publisher name="solaris">
                  <origin name="http://pkg.oracle.com/solaris/release"/>
                </publisher>
              </source>
              <software_data action="install">
                <name>pkg:/entire@latest</name>
                <name>pkg:/group/system/solaris-large-server</name>
              </software_data>
            </software>
            <configuration type="zone" name="zone1" source="file:///net/server_ip/export/zone_config/zone1"/>
            <configuration type="zone" name="zone2" source="file:///net/server_ip/export/zone_config/zone2"/>
          </ai_instance>
        </auto_install>

    The following example adds the /var/tmp/orig_default2.xml AI manifest to the s11x86service install service:

        # installadm create-manifest -n s11x86service -f /var/tmp/orig_default2.xml -m gzmanifest

    You can verify the manifest creation using the following command:

        # installadm list -n s11x86service -m
        Service/Manifest Name  Status   Criteria
        ---------------------  ------   --------
        s11x86service
           orig_default        Default  None
           gzmanifest          Inactive None

    We can see from the command output that the new manifest named gzmanifest has been created and associated with the s11x86service install service.

    Step 5: Non-Global Zone manifest setup

    The AI manifest for non-global zone installation is similar to the AI manifest for installing the global zone. If you do not provide a custom AI manifest for a non-global zone, the default AI manifest for Zones is used. The default AI manifest for Zones is available at /usr/share/auto_install/manifest/zone_default.xml, and it is what we use in this example. The following is the default AI manifest for zones:

        # cat /usr/share/auto_install/manifest/zone_default.xml
        <?xml version="1.0" encoding="UTF-8"?>
        <!--
          Copyright (c) 2011, 2012, Oracle and/or its affiliates. All rights reserved.
        -->
        <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
        <auto_install>
          <ai_instance name="zone_default">
            <target>
              <logical>
                <zpool name="rpool">
                  <!--
                    Subsequent <filesystem> entries instruct an installer
                    to create following ZFS datasets:

                        <root_pool>/export         (mounted on /export)
                        <root_pool>/export/home    (mounted on /export/home)

                    Those datasets are part of standard environment
                    and should be always created.
                    In rare cases, if there is a need to deploy a zone
                    without these datasets, either comment out or remove
                    <filesystem> entries. In such scenario, it has to be also
                    assured that in case of non-interactive post-install
                    configuration, creation of initial user account is
                    disabled in related system configuration profile.
                    Otherwise the installed zone would fail to boot.
                  -->
                  <filesystem name="export" mountpoint="/export"/>
                  <filesystem name="export/home"/>
                  <be name="solaris">
                    <options>
                      <option name="compression" value="on"/>
                    </options>
                  </be>
                </zpool>
              </logical>
            </target>
            <software type="IPS">
              <destination>
                <image>
                  <!-- Specify locales to install -->
                  <facet set="false">facet.locale.*</facet>
                  <facet set="true">facet.locale.de</facet>
                  <facet set="true">facet.locale.de_DE</facet>
                  <facet set="true">facet.locale.en</facet>
                  <facet set="true">facet.locale.en_US</facet>
                  <facet set="true">facet.locale.es</facet>
                  <facet set="true">facet.locale.es_ES</facet>
                  <facet set="true">facet.locale.fr</facet>
                  <facet set="true">facet.locale.fr_FR</facet>
                  <facet set="true">facet.locale.it</facet>
                  <facet set="true">facet.locale.it_IT</facet>
                  <facet set="true">facet.locale.ja</facet>
                  <facet set="true">facet.locale.ja_*</facet>
                  <facet set="true">facet.locale.ko</facet>
                  <facet set="true">facet.locale.ko_*</facet>
                  <facet set="true">facet.locale.pt</facet>
                  <facet set="true">facet.locale.pt_BR</facet>
                  <facet set="true">facet.locale.zh</facet>
                  <facet set="true">facet.locale.zh_CN</facet>
                  <facet set="true">facet.locale.zh_TW</facet>
                </image>
              </destination>
              <software_data action="install">
                <name>pkg:/group/system/solaris-small-server</name>
              </software_data>
            </software>
          </ai_instance>
        </auto_install>

    (Optional) We can customize the default AI manifest for Zones. Create a backup copy of this file under a different name, for example, zone_default2.xml, and edit the copy:

        # cp /usr/share/auto_install/manifest/zone_default.xml /var/tmp/zone_default2.xml

    Edit the copy (/var/tmp/zone_default2.xml). The following example adds the /var/tmp/zone_default2.xml AI manifest to the s11x86service install service and specifies that zone1 and zone2 should use this manifest.
        # installadm create-manifest -n s11x86service -f /var/tmp/zone_default2.xml -m zones_manifest -c zonename="zone1 zone2"

    Note: Do not use the following elements or attributes in a non-global zone AI manifest:

    - The auto_reboot attribute of the ai_instance element
    - The http_proxy attribute of the ai_instance element
    - The disk child element of the target element
    - The noswap attribute of the logical element
    - The nodump attribute of the logical element
    - The configuration element

    Step 6: Global Zone profile setup

    We are going to create a global zone configuration profile which includes the host information, for example: host name, IP address, name services, etc.

        # sysconfig create-profile -o /var/tmp/gz_profile.xml

    You need to provide the host information, for example:

    - Default router
    - Root password
    - DNS information

    The output should eventually disappear and be replaced by the initial screen of the System Configuration Tool (see Figure 2), where you can do the final configuration.

    Figure 2. Profile creation menu

    You can validate the profile using the following command:

        # installadm validate -n s11x86service -P /var/tmp/gz_profile.xml
        Validating static profile gz_profile.xml...
        Passed

    Next, instantiate a profile with the install service. In our case, use the following syntax for doing this:

        # installadm create-profile -n s11x86service -f /var/tmp/gz_profile.xml -p gz_profile

    You can verify profile creation using the following command:

        # installadm list -n s11x86service -p
        Service/Profile Name  Criteria
        --------------------  --------
        s11x86service
           gz_profile         None

    We can see that the gz_profile has been created and associated with the s11x86service install service.

    Step 7: Set up the Solaris Zones configuration profiles

    This step is similar to the Global Zone profile creation in Step 6.

        # sysconfig create-profile -o /var/tmp/zone1_profile.xml
        # sysconfig create-profile -o /var/tmp/zone2_profile.xml

    You can validate the profiles using the following commands:

        # installadm validate -n s11x86service -P /var/tmp/zone1_profile.xml
        Validating static profile zone1_profile.xml...
        Passed

        # installadm validate -n s11x86service -P /var/tmp/zone2_profile.xml
        Validating static profile zone2_profile.xml...
        Passed

    Next, associate the profiles with the install service. The following example adds the zone1_profile.xml configuration profile to the s11x86service install service and specifies that zone1 should use this profile:

        # installadm create-profile -n s11x86service -f /var/tmp/zone1_profile.xml -p zone1_profile -c zonename=zone1

    The following example adds the zone2_profile.xml configuration profile to the s11x86service install service and specifies that zone2 should use this profile:

        # installadm create-profile -n s11x86service -f /var/tmp/zone2_profile.xml -p zone2_profile -c zonename=zone2

    You can verify the profile creation using the following command:

        # installadm list -n s11x86service -p
        Service/Profile Name  Criteria
        --------------------  --------
        s11x86service
           zone1_profile      zonename = zone1
           zone2_profile      zonename = zone2
           gz_profile         None

    We can see that we have three profiles in the s11x86service install service:

    - Global Zone: gz_profile
    - zone1: zone1_profile
    - zone2: zone2_profile
    Step 8: Global Zone setup

    Associate the global zone client with the manifest and the profile that we created in the previous steps. The following example adds the manifest and profile to the client (global zone), where:

    - gzmanifest is the name of the manifest.
    - gz_profile is the name of the configuration profile.
    - mac="0:14:4f:2:a:19" is the client (global zone) MAC address.
    - s11x86service is the install service name.

        # installadm set-criteria -m gzmanifest -p gz_profile -c mac="0:14:4f:2:a:19" -n s11x86service

    You can verify the manifest and profile association using the following command:

        # installadm list -n s11x86service -p -m
        Service/Manifest Name  Status   Criteria
        ---------------------  ------   --------
        s11x86service
           gzmanifest                   mac  = 00:14:4F:02:0A:19
           orig_default        Default  None

        Service/Profile Name  Criteria
        --------------------  --------
        s11x86service
           gz_profile         mac      = 00:14:4F:02:0A:19
           zone2_profile      zonename = zone2
           zone1_profile      zonename = zone1

    Step 9: Provision the host with the Non-Global Zones

    The next step is to boot the client system off the network and provision it using the Automated Install service that we just set up. First, boot the client system. Figure 3 shows the network boot attempt (when done on an x86 system):

    Figure 3. Network Boot

    Then you will be prompted by a GRUB menu, with a timer, as shown in Figure 4. The default selection (the "Text Installer and command line" option) is highlighted. Press the down arrow to highlight the second option, labeled Automated Install, and then press Enter. The reason we need to do this is because we want to prevent a system from being automatically re-installed if it were to be booted from the network accidentally.

    Figure 4. GRUB Menu

    What follows is the continuation of a networked boot from the Automated Install server. The client downloads a mini-root (a small set of files in which to successfully run the installer), identifies the location of the Automated Install manifest on the network, retrieves that manifest, and then processes it to identify the address of the IPS repository from which to obtain the desired software payload. Non-Global Zones are installed and configured on the first reboot after the Global Zone is installed.

    You can list the status of all the Solaris Zones using the following command:

        # zoneadm list -civ

    Once the Zones are in the running state you can log in to a Zone using the following command:

        # zlogin zone1

    Troubleshooting Automated Installations

    If an installation to a client system fails, you can find the client log at /system/volatile/install_log.

    NOTE: Zones are not installed if any of the following errors occurs:

    - A zone config file is not syntactically correct.
    - A collision exists among zone names, zone paths, or delegated ZFS datasets in the set of zones to be installed.
    - Required datasets are not configured in the global zone.

    For more troubleshooting information see "Installing Oracle Solaris 11 Systems".

    Conclusion

    This paper demonstrated the benefits of using the Automated Install server to simplify Non-Global Zones setup, including the creation and configuration of the global zone manifest and the Solaris Zones profiles.

    Read the article

  • Solaris 11 pkg fix is my new friend

    - by user12611829
    While putting together some examples of the Solaris 11 Automated Installer (AI), I managed to really mess up my system, to the point where AI was completely unusable. This was my fault as a combination of unfortunate incidents left some remnants that were causing problems, so I tried to clean things up. Unsuccessfully. Perhaps that was a bad idea (OK, it was a terrible idea), but this is Solaris 11 and there are a few more tricks in the sysadmin toolbox. Here's what I did.

        # rm -rf /install/*
        # rm -rf /var/ai
        # installadm create-service -n solaris11-x86 --imagepath /install/solaris11-x86 \
          -s [email protected]
        Warning: Service svc:/network/dns/multicast:default is not online.
        Installation services will not be advertised via multicast DNS.
        Creating service from: [email protected]
        DOWNLOAD      PKGS      FILES      XFER (MB)    SPEED
        Completed      1/1   130/130    264.4/264.4     0B/s
        PHASE                             ITEMS
        Installing new actions          284/284
        Updating package state database    Done
        Updating image state               Done
        Creating fast lookup database      Done
        Reading search index               Done
        Updating search index               1/1
        Creating i386 service: solaris11-x86
        Image path: /install/solaris11-x86

    So far so good. Then comes an oops.....

        setup-service[168]: cd: /var/ai//service/.conf-templ: [No such file or directory]
                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    This is where you generally say a few things to yourself, and then promise to quit deleting configuration files and directories when you don't know what you are doing. Then you recall that the new Solaris 11 packaging system has some ability to correct common mistakes (like the one I just made). Let's give it a try.

        # pkg fix installadm
        Verifying: pkg://solaris/install/installadm                ERROR
        dir: var/ai
            Group: 'root (0)' should be 'sys (3)'
        dir: var/ai/ai-webserver
            Missing: directory does not exist
        dir: var/ai/ai-webserver/compatibility-configuration
            Missing: directory does not exist
        dir: var/ai/ai-webserver/conf.d
            Missing: directory does not exist
        dir: var/ai/image-server
            Group: 'root (0)' should be 'sys (3)'
        dir: var/ai/image-server/cgi-bin
            Missing: directory does not exist
        dir: var/ai/image-server/images
            Group: 'root (0)' should be 'sys (3)'
        dir: var/ai/image-server/logs
            Missing: directory does not exist
        dir: var/ai/profile
            Missing: directory does not exist
        dir: var/ai/service
            Group: 'root (0)' should be 'sys (3)'
        dir: var/ai/service/.conf-templ
            Missing: directory does not exist
        dir: var/ai/service/.conf-templ/AI_data
            Missing: directory does not exist
        dir: var/ai/service/.conf-templ/AI_files
            Missing: directory does not exist
        file: var/ai/ai-webserver/ai-httpd-templ.conf
            Missing: regular file does not exist
        file: var/ai/service/.conf-templ/AI.db
            Missing: regular file does not exist
        file: var/ai/image-server/cgi-bin/cgi_get_manifest.py
            Missing: regular file does not exist
        Created ZFS snapshot: 2012-12-11-21:09:53
        Repairing: pkg://solaris/install/installadm
        Creating Plan (Evaluating mediators): |
        DOWNLOAD      PKGS      FILES      XFER (MB)    SPEED
        Completed      1/1       3/3        0.0/0.0     0B/s
        PHASE                             ITEMS
        Updating modified actions         16/16
        Updating image state               Done
        Creating fast lookup database      Done

    In just a few moments, IPS found the missing files and incorrect ownerships/permissions. Instead of reinstalling the system, or falling back to an earlier Live Upgrade boot environment, I was able to create my AI services and now all is well.

        # installadm create-service -n solaris11-x86 --imagepath /install/solaris11-x86 \
          -s [email protected]
        Warning: Service svc:/network/dns/multicast:default is not online.
        Installation services will not be advertised via multicast DNS.
        Creating service from: [email protected]
        DOWNLOAD      PKGS      FILES      XFER (MB)    SPEED
        Completed      1/1   130/130    264.4/264.4     0B/s
        PHASE                             ITEMS
        Installing new actions          284/284
        Updating package state database    Done
        Updating image state               Done
        Creating fast lookup database      Done
        Reading search index               Done
        Updating search index               1/1
        Creating i386 service: solaris11-x86
        Image path: /install/solaris11-x86
        Refreshing install services
        Warning: mDNS registry of service solaris11-x86 could not be verified.
        Creating default-i386 alias
        Setting the default PXE bootfile(s) in the local DHCP configuration to:
        bios clients (arch 00:00): default-i386/boot/grub/pxegrub
        Refreshing install services
        Warning: mDNS registry of service default-i386 could not be verified.

        # installadm create-service -n solaris11u1-x86 --imagepath /install/solaris11u1-x86 \
          -s [email protected]
        Warning: Service svc:/network/dns/multicast:default is not online.
        Installation services will not be advertised via multicast DNS.
        Creating service from: [email protected]
        DOWNLOAD      PKGS      FILES      XFER (MB)    SPEED
        Completed      1/1   514/514    292.3/292.3     0B/s
        PHASE                             ITEMS
        Installing new actions          661/661
        Updating package state database    Done
        Updating image state               Done
        Creating fast lookup database      Done
        Reading search index               Done
        Updating search index               1/1
        Creating i386 service: solaris11u1-x86
        Image path: /install/solaris11u1-x86
        Refreshing install services
        Warning: mDNS registry of service solaris11u1-x86 could not be verified.

        # installadm list
        Service Name     Alias Of       Status  Arch   Image Path
        ------------     --------       ------  ----   ----------
        default-i386     solaris11-x86  on      i386   /install/solaris11-x86
        solaris11-x86    -              on      i386   /install/solaris11-x86
        solaris11u1-x86  -              on      i386   /install/solaris11u1-x86

    This is way way better than pkgchk -f in Solaris 10. I'm really beginning to like this new IPS packaging system.

    Read the article

  • What Keeps You from Changing Your Public IP Address and Wreaking Havoc on the Internet?

    - by Jason Fitzpatrick
    What exactly is preventing you (or anyone else) from changing their IP address and causing all sorts of headaches for ISPs and other Internet users?

    Today's Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader Whitemage is curious about what's preventing him from wantonly changing his IP address and causing trouble:

    An interesting question was asked of me and I did not know what to answer. So I'll ask here. Let's say I subscribed to an ISP and I'm using cable internet access. The ISP gives me a public IP address of 60.61.62.63. What keeps me from changing this IP address to, let's say, 60.61.62.75, and messing with another consumer's internet access? For the sake of this argument, let's say that this other IP address is also owned by the same ISP. Also, let's assume that it's possible for me to go into the cable modem settings and manually change the IP address.

    Under a business contract where you are allocated static addresses, you are also assigned a default gateway, a network address and a broadcast address. So that's 3 addresses the ISP "loses" to you. That seems very wasteful for dynamically assigned IP addresses, which the majority of customers are. Could they simply be using static ARPs? ACLs? Other simple mechanisms?

    Two things to investigate here: why can't we just go around changing our addresses, and is the assignment process as wasteful as it seems?

    The Answer

    SuperUser contributor Moses offers some insight:

    Cable modems aren't like your home router (i.e. they don't have a web interface with simple point-and-click buttons that any kid can "hack" into). Cable modems are "looked up" and located by their MAC address by the ISP, and are typically accessed by technicians using proprietary software that only they have access to, that only runs on their servers, and therefore can't really be stolen. Cable modems also authenticate and cross-check settings with the ISP's servers. The server has to tell the modem whether its settings (and location on the cable network) are valid, and simply sets it to what the ISP has configured for it (bandwidth, DHCP allocations, etc). For instance, when you tell your ISP "I would like a static IP, please," they allocate one to the modem through their servers, and the modem allows you to use that IP. Same with bandwidth changes, for instance. To do what you are suggesting, you would likely have to break into the servers at the ISP and change what it has set up for your modem.

    Could they simply be using static ARPs? ACLs? Other simple mechanisms?

    Every ISP is different, both in practice and how close they are with the larger network that is providing service to them. Depending on those factors, they could be using a combination of ACLs and static ARP. It also depends on the technology in the cable network itself. The ISP I worked for used some form of ACL, but that knowledge was a little beyond my paygrade. I only got to work with the technician's interface and do routine maintenance and service changes.

    What keeps me from changing this IP address to, let's say, 60.61.62.75 and messing with another consumer's internet access?

    Given the above, what keeps you from changing your IP to one that your ISP hasn't specifically given to you is a server that is instructing your modem what it can and can't do.
    Even if you somehow broke into the modem, if 60.61.62.75 is already allocated to another customer, then the server will simply tell your modem that it can't have it.

    David Schwartz offers some additional insight with a link to a white paper for the really curious:

    Most modern ISPs (last 13 years or so) will not accept traffic from a customer connection with a source IP address they would not route to that customer were it the destination IP address. This is called "reverse path forwarding". See BCP 38.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • The Linux powered LAN Gaming House

    - by sachinghalot
    LAN parties offer the enjoyment of head to head gaming in a real-life social environment. In general, they are experiencing decline thanks to the convenience of Internet gaming, but Kenton Varda is a man who takes his LAN gaming very seriously. His LAN gaming house is a fascinating project, and best of all, Linux plays a part in making it all work.Varda has done his own write ups (short, long), so I'm only going to give an overview here. The setup is a large house with 12 gaming stations and a single server computer.The client computers themselves are rack mounted in a server room, and they are linked to the gaming stations on the floor above via extension cables (HDMI for video and audio and USB for mouse and keyboard). Each client computer, built into a 3U rack mount case, is a well specced gaming rig in its own right, sporting an Intel Core i5 processor, 4GB of RAM and an Nvidia GeForce 560 along with a 60GB SSD drive.Originally, the client computers ran Ubuntu Linux rather than Windows and the games executed under WINE, but Varda had to abandon this scheme. As he explains on his site:"Amazingly, a majority of games worked fine, although many had minor bugs (e.g. flickering mouse cursor, minor rendering artifacts, etc.). Some games, however, did not work, or had bad bugs that made them annoying to play."Subsequently, the gaming computers have been moved onto a more conventional gaming choice, Windows 7. It's a shame that WINE couldn't be made to work, but I can sympathize as it's rare to find modern games that work perfectly and at full native speed. Another problem with WINE is that it tends to suffer from regressions, which is hardly surprising when considering the difficulty of constantly improving the emulation of the Windows API. Varda points out that he preferred working with Linux clients as they were easier to modify and came with less licensing baggage.Linux still runs the server and all of the tools used are open source software. The hardware here is a Intel Xeon E3-1230 with 4GB of RAM. The storage hanging off this machine is a bit more complex than the clients. In addition to the 60GB SSD, it also has 2x1TB drives and a 240GB SDD.When the clients were running Linux, they booted over PXE using a toolchain that will be familiar to anyone who has setup Linux network booting. DHCP pointed the clients to the server which then supplied PXELINUX using TFTP. When booted, file access was accomplished through network block device (NBD). This is a very easy to use system that allows you to serve the contents of a file as a block device over the network. The client computer runs a user mode device driver and the device can be mounted within the file system using the mount command.One snag with offering file access via NBD is that it's difficult to impose any security restrictions on different areas of the file system as the server only sees a single file. The advantage is perfomance as the client operating system simply sees a block device, and besides, these security issues aren't relevant in this setup.Unfortunately, Windows 7 can't use NBD, so, Varda had to switch to iSCSI (which works in both server and client mode under Linux). His network cards are not compliant with this standard when doing a netboot, but fortunately, gPXE came to the rescue, and he boostraps it over PXE. gPXE is also available as an ISO image and is worth knowing about if you encounter an awkward machine that can't manage a network boot. 
It can also optionally boot from a HTTP server rather than the more traditional TFTP server.According to Varda, booting all 12 machines over the Gigabit Ethernet network is surprisingly fast, and once booted, the machines don't seem noticeably slower than if they were using local storage. Once loaded, most games attempt to load in as much data as possible, filling the RAM, and the the disk and network bandwidth required is small. It's worth noting that these are aspects of this project that might differ from some other thin client scenarios.At time of writing, it doesn't seem as though the local storage of the client machines is being utilized. Instead, the clients boot into Windows from an image on the server that contains the operating system and the games themselves. It uses the copy on write feature of LVM so that any writes from a client are added to a differencing image allocated to that client. As the administrator, Varda can log into the Linux server and authorize changes to the master image for updates etc.SummaryOverall, Varda estimates the total cost of the project at about $40,000, and of course, he needed a property that offered a large physical space in order to house the computers and the gaming workstations. Obviously, this project has stark differences to most thin client projects. The balance between storage, network usage, GPU power and security would not be typical of an office installation, for example. The only letdown is that WINE proved to be insufficiently compatible to run a wide variety of modern games, but that is, perhaps, asking too much of it, and hats off to Varda for trying to make it work.

    Read the article

  • FFmpeg extract clip - stream frame rate differs from container frame rate (x264, aac)

    - by fideli
    Summary H.264 video seems to have a really high frame rate that requires a scaling factor to the applied to the duration of video that I'm trying to extract (900x lower). Body I'm trying to extract a clip from a movie that I have in MP4 format (created using Handbrake). After trying mencoder and VLC, I decided to give FFmpeg a shot since it was the least troublesome when it came to copying the codecs. That is, compared to mencoder and VLC, the resulting file was still playable in QuickTime (I know about Perian, etc, I'm just trying to learn how all this works). Anyway, my command was as follows: ffmpeg -ss 01:15:51 -t 00:05:59 -i outofsight.mp4 \ -acodec copy -vcodec copy clip.mp4 During the copy, The following comes up: Seems stream 0 codec frame rate differs from container frame rate: 45000.00 (45000/1) -> 25.00 (25/1) Input #0, mov,mp4,m4a,3gp,3g2,mj2, from outofsight.mp4': Duration: 01:57:42.10, start: 0.000000, bitrate: 830 kb/s Stream #0.0(und): Video: h264, yuv420p, 720x384, 25 tbr, 22500 tbn, 45k tbc Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16 Output #0, mp4, to 'out.mp4': Stream #0.0(und): Video: libx264, yuv420p, 720x384, q=2-31, 90k tbn, 22500 tbc Stream #0.1(eng): Audio: libfaac, 48000 Hz, stereo, s16 Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1 Press [q] to stop encoding frame= 2591 fps=2349 q=-1.0 size= 8144kB time=101.60 bitrate= 656.7kbits/s … Instead of a 5:59 duration clip, I get the entire rest of the movie. So, to test this, I ran the ffmpeg command with -t 00:00:01. What I got was exactly a 15:00 minute clip. So I did some black box engineering and decided to scale my -t option by calculating what value to enter given that 1 second was interpreted as 900 s. For my desired 359 s clip, I calculated 0.399 s and so my ffmpeg command became: ffmpeg -ss 01:15.51 -t 00:00:00.399 -i outofsight.mp4 \ -acodec copy -vcodec copy clip.mp4 This works, but I have no idea why the duration is scaled by 900. Investigating further, each ffmpeg run has the line: Seems stream 0 codec frame rate differs from container frame rate: 45000.00 (45000/1) -> 25.00 (25/1) 45000/25 = 1800. Must be a relation somewhere. Somehow, the obscenely high frame rate is causing issues with the timing. How is that frame rate so high? The best part about this is that the resulting clip.mp4 has the exact same feature (due to the copied video codec), and taking further clips from this needs the same scaling for the -t duration option. Therefore, I've made it available for anyone willing to check this out. Appendix The preamble for ffmpeg on my system (built using MacPorts ffmpeg port): FFmpeg version 0.5, Copyright (c) 2000-2009 Fabrice Bellard, et al. configuration: --prefix=/opt/local --disable-vhook --enable-gpl --enable-postproc --enable-swscale --enable-avfilter --enable-avfilter-lavf --enable-libmp3lame --enable-libvorbis --enable-libtheora --enable-libdirac --enable-libschroedinger --enable-libfaac --enable-libfaad --enable-libxvid --enable-libx264 --mandir=/opt/local/share/man --enable-shared --enable-pthreads --cc=/usr/bin/gcc-4.2 --arch=x86_64 libavutil 49.15. 0 / 49.15. 0 libavcodec 52.20. 0 / 52.20. 0 libavformat 52.31. 0 / 52.31. 0 libavdevice 52. 1. 0 / 52. 1. 0 libavfilter 1. 4. 0 / 1. 4. 0 libswscale 1. 7. 1 / 1. 7. 1 libpostproc 51. 2. 0 / 51. 2. 0 built on Jan 4 2010 21:51:51, gcc: 4.2.1 (Apple Inc. build 5646) (dot 1)

    Read the article

  • Dealing with HTTP w00tw00t attacks

    - by Saif Bechan
    I have a server with apache and I recently installed mod_security2 because I get attacked a lot by this: My apache version is apache v2.2.3 and I use mod_security2.c This were the entries from the error log: [Wed Mar 24 02:35:41 2010] [error] [client 88.191.109.38] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:) [Wed Mar 24 02:47:31 2010] [error] [client 202.75.211.90] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:) [Wed Mar 24 02:47:49 2010] [error] [client 95.228.153.177] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:) [Wed Mar 24 02:48:03 2010] [error] [client 88.191.109.38] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:) Here are the errors from the access_log: 202.75.211.90 - - [29/Mar/2010:10:43:15 +0200] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-" 211.155.228.169 - - [29/Mar/2010:11:40:41 +0200] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-" 211.155.228.169 - - [29/Mar/2010:12:37:19 +0200] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-" I tried configuring mod_security2 like this: SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind" SecFilterSelective REQUEST_URI "\w00tw00t\.at\.ISC\.SANS" SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS" SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:" SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:\)" The thing in mod_security2 is that SecFilterSelective can not be used, it gives me errors. Instead I use a rule like this: SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind" SecRule REQUEST_URI "\w00tw00t\.at\.ISC\.SANS" SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS" SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:" SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:\)" Even this does not work. I don't know what to do anymore. Anyone have any advice? Update 1 I see that nobody can solve this problem using mod_security. So far using ip-tables seems like the best option to do this but I think the file will become extremely large because the ip changes serveral times a day. I came up with 2 other solutions, can someone comment on them on being good or not. The first solution that comes to my mind is excluding these attacks from my apache error logs. This will make is easier for me to spot other urgent errors as they occur and don't have to spit trough a long log. The second option is better i think, and that is blocking hosts that are not sent in the correct way. In this example the w00tw00t attack is send without hostname, so i think i can block the hosts that are not in the correct form. Update 2 After going trough the answers I came to the following conclusions. To have custom logging for apache will consume some unnecessary recourses, and if there really is a problem you probably will want to look at the full log without anything missing. It is better to just ignore the hits and concentrate on a better way of analyzing your error logs. Using filters for your logs a good approach for this. Final thoughts on the subject The attack mentioned above will not reach your machine if you at least have an up to date system so there are basically no worries. It can be hard to filter out all the bogus attacks from the real ones after a while, because both the error logs and access logs get extremely large. 
Preventing this from happening in any way will cost you resources and they it is a good practice not to waste your resources on unimportant stuff. The solution i use now is Linux logwatch. It sends me summaries of the logs and they are filtered and grouped. This way you can easily separate the important from the unimportant. Thank you all for the help, and I hope this post can be helpful to someone else too.

    Read the article

  • Spam Activity From my computer

    - by Bnymn
    I'm using Ubuntu 12.04 64bit. I'm using HTTP proxy over ssh as mentioned here. If I do not start TinyProxy, everything is OK. But, when I start TinyProxy, I'm getting the following. I think there is an application running on my machine and watching the proxy to start. But I could not decide which one it could be. ps ax PID TTY STAT TIME COMMAND 1 ? Ss 0:01 /sbin/init 2 ? S 0:00 [kthreadd] 3 ? S 0:00 [ksoftirqd/0] 6 ? S 0:00 [migration/0] 7 ? S 0:00 [watchdog/0] 21 ? S< 0:00 [cpuset] 22 ? S< 0:00 [khelper] 23 ? S 0:00 [kdevtmpfs] 24 ? S< 0:00 [netns] 26 ? S 0:00 [sync_supers] 27 ? S 0:00 [bdi-default] 28 ? S< 0:00 [kintegrityd] 29 ? S< 0:00 [kblockd] 30 ? S< 0:00 [ata_sff] 31 ? S 0:00 [khubd] 32 ? S< 0:00 [md] 34 ? S 0:00 [khungtaskd] 35 ? S 0:00 [kswapd0] 36 ? SN 0:00 [ksmd] 37 ? SN 0:00 [khugepaged] 38 ? S 0:00 [fsnotify_mark] 39 ? S 0:00 [ecryptfs-kthrea] 40 ? S< 0:00 [crypto] 48 ? S< 0:00 [kthrotld] 49 ? S 0:00 [scsi_eh_0] 50 ? S 0:00 [scsi_eh_1] 51 ? S 0:00 [scsi_eh_2] 52 ? S 0:00 [scsi_eh_3] 75 ? S< 0:00 [devfreq_wq] 240 ? S< 0:00 [xfs_mru_cache] 241 ? S< 0:00 [xfslogd] 242 ? S< 0:00 [xfsdatad] 243 ? S< 0:00 [xfsconvertd] 245 ? S 0:00 [xfsbufd/sda3] 246 ? S 0:01 [xfsaild/sda3] 330 ? S 0:00 upstart-udev-bridge --daemon 333 ? Ss 0:00 /sbin/udevd --daemon 472 ? S< 0:00 [cfg80211] 479 ? S< 0:00 [kpsmoused] 671 ? S 0:00 upstart-socket-bridge --daemon 779 ? S 0:00 [xfsbufd/sda4] 781 ? S 0:01 [xfsaild/sda4] 785 ? S< 0:00 [ttm_swap] 800 ? S< 0:00 [hd-audio0] 803 ? S< 0:00 [hd-audio1] 857 ? Sl 0:00 rsyslogd -c5 869 ? Ss 0:04 dbus-daemon --system --fork --activation=upstart 881 ? Ss 0:00 /usr/sbin/modem-manager 883 ? Ss 0:00 /usr/sbin/bluetoothd 905 ? Ssl 0:02 NetworkManager 906 ? Ss 0:00 /usr/sbin/cupsd -F 910 ? Sl 0:02 /usr/lib/policykit-1/polkitd --no-debug 918 ? S 0:00 avahi-daemon: running [bunyamin-hp.local] 919 ? S 0:00 avahi-daemon: chroot helper 920 ? S< 0:00 [krfcommd] 956 ? Ss 0:00 /sbin/wpa_supplicant -B -P /run/sendsigs.omit.d/wpasupplicant.pid -u -s -O /var/run/wpa_supplicant 980 tty4 Ss+ 0:00 /sbin/getty -8 38400 tty4 985 tty5 Ss+ 0:00 /sbin/getty -8 38400 tty5 1000 tty2 Ss+ 0:00 /sbin/getty -8 38400 tty2 1006 tty3 Ss+ 0:00 /sbin/getty -8 38400 tty3 1009 tty6 Ss+ 0:00 /sbin/getty -8 38400 tty6 1024 ? Ss 0:00 acpid -c /etc/acpi/events -s /var/run/acpid.socket 1025 ? Ss 0:00 atd 1026 ? Ss 0:00 cron 1029 ? Ss 0:01 /usr/sbin/irqbalance 1034 ? Ssl 0:00 whoopsie 1091 ? Ssl 0:00 lightdm 1216 tty1 Ss+ 0:00 /sbin/getty -8 38400 tty1 1224 ? Sl 0:00 /usr/lib/accountsservice/accounts-daemon 1241 ? Sl 0:00 /usr/sbin/console-kit-daemon --no-daemon 1356 ? Sl 0:00 /usr/lib/upower/upowerd 1447 ? Sl 0:00 /usr/lib/x86_64-linux-gnu/colord/colord 1539 ? SNl 0:00 /usr/lib/rtkit/rtkit-daemon 1723 ? Sl 0:00 /usr/lib/udisks/udisks-daemon 1724 ? S 0:00 udisks-daemon: not polling any devices 2077 ? Z 0:00 [lightdm] <defunct> 2433 ? Z 0:00 [lightdm] <defunct> 3491 ? S 0:00 [flush-8:0] 4023 ? S 0:00 [kworker/u:14] 4034 ? S 0:00 [migration/1] 4035 ? S 0:00 [kworker/1:3] 4036 ? S 0:00 [ksoftirqd/1] 4037 ? S 0:00 [watchdog/1] 4038 ? S 0:00 [migration/2] 4040 ? S 0:00 [ksoftirqd/2] 4041 ? S 0:00 [watchdog/2] 4042 ? S 0:00 [migration/3] 4043 ? S 0:00 [kworker/3:1] 4044 ? S 0:00 [ksoftirqd/3] 4045 ? S 0:00 [watchdog/3] 4047 ? S 0:00 [irq/43-mei] 4070 ? S 0:00 [kworker/3:0] 4072 ? S 0:00 [kworker/1:0] 4164 ? Ss 0:00 anacron -s 4549 tty7 Ss+ 1:13 /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch 4683 ? Sl 0:00 lightdm --session-child 12 47 4718 ? 
Sl 0:00 /usr/bin/gnome-keyring-daemon --daemonize --login 4729 ? Ssl 0:00 gnome-session --session=gnome-fallback 4765 ? Ss 0:00 /usr/bin/ssh-agent /usr/bin/dbus-launch --exit-with-session gnome-session --session=gnome-fallback 4768 ? S 0:00 /usr/bin/dbus-launch --exit-with-session gnome-session --session=gnome-fallback 4769 ? Ss 0:00 //bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session 4779 ? Sl 0:01 /usr/lib/gnome-settings-daemon/gnome-settings-daemon 4786 ? S 0:00 /usr/lib/gvfs/gvfsd 4788 ? Sl 0:00 /usr/lib/gvfs//gvfs-fuse-daemon -f /home/bunyamin/.gvfs 4797 ? Sl 0:00 /usr/lib/gnome-settings-daemon/gsd-printer 4799 ? Sl 0:03 metacity 4805 ? S 0:00 /usr/lib/x86_64-linux-gnu/gconf/gconfd-2 4811 ? Sl 0:10 gnome-panel 4814 ? S 0:00 syndaemon -i 2.0 -K -R -t 4819 ? S<l 0:00 /usr/bin/pulseaudio --start --log-target=syslog 4821 ? Sl 0:00 /usr/lib/dconf/dconf-service 4826 ? Sl 0:00 /usr/lib/gnome-settings-daemon/gnome-fallback-mount-helper 4828 ? Sl 0:06 nautilus -n 4830 ? Sl 0:02 nm-applet 4832 ? Sl 0:00 /usr/lib/policykit-1-gnome/polkit-gnome-authentication-agent-1 4835 ? Sl 0:00 bluetooth-applet 4851 ? S 0:00 /usr/lib/pulseaudio/pulse/gconf-helper 4854 ? Sl 0:04 /usr/lib/indicator-applet/indicator-applet-complete 4859 ? S 0:00 /usr/lib/gvfs/gvfs-gdu-volume-monitor 4863 ? S 0:00 /usr/lib/gvfs/gvfs-gphoto2-volume-monitor 4865 ? Sl 0:00 /usr/lib/gvfs/gvfs-afc-volume-monitor 4871 ? S 0:00 /usr/lib/gvfs/gvfsd-trash --spawner :1.6 /org/gtk/gvfs/exec_spaw/0 4874 ? Sl 0:00 /usr/lib/indicator-application/indicator-application-service 4876 ? Sl 0:00 /usr/lib/indicator-datetime/indicator-datetime-service 4878 ? Sl 0:00 /usr/lib/indicator-messages/indicator-messages-service 4887 ? Sl 0:00 /usr/lib/indicator-printers/indicator-printers-service 4888 ? Sl 0:00 /usr/lib/indicator-session/indicator-session-service 4889 ? Sl 0:00 /usr/lib/indicator-sound/indicator-sound-service 4906 ? S 0:00 /usr/lib/geoclue/geoclue-master 4929 ? S 0:00 /usr/lib/ubuntu-geoip/ubuntu-geoip-provider 4938 ? Sl 0:11 /usr/lib/gnome-applets/multiload-applet-2 4939 ? Sl 0:01 /usr/lib/gnome-applets/cpufreq-applet 4953 ? S 0:00 /usr/lib/gvfs/gvfsd-metadata 4955 ? S 0:00 /usr/lib/gvfs/gvfsd-burn --spawner :1.6 /org/gtk/gvfs/exec_spaw/1 4957 ? Sl 3:22 /usr/lib/firefox/firefox 4973 ? Sl 0:00 /usr/lib/x86_64-linux-gnu/at-spi2-core/at-spi-bus-launcher 4997 ? Sl 0:00 /usr/lib/gnome-disk-utility/gdu-notification-daemon 5000 ? Sl 0:00 telepathy-indicator 5007 ? Sl 0:00 /usr/lib/telepathy/mission-control-5 5012 ? Sl 0:00 /usr/lib/gnome-online-accounts/goa-daemon 5018 ? Sl 0:00 gnome-screensaver 5019 ? Sl 0:01 zeitgeist-datahub 5025 ? Sl 0:00 /usr/bin/zeitgeist-daemon 5033 ? Sl 0:00 /usr/lib/zeitgeist/zeitgeist-fts 5041 ? S 0:00 /bin/cat 5052 ? Sl 0:08 /usr/bin/gnome-terminal -x /bin/sh -c '/home/bunyamin/Desktop/SSH Tunnel' 5058 ? S 0:00 gnome-pty-helper 5067 ? Sl 0:00 update-notifier 5090 ? S 0:00 /usr/bin/python /usr/lib/system-service/system-service-d 5130 ? Sl 0:00 /usr/lib/deja-dup/deja-dup/deja-dup-monitor 5135 ? S 0:00 /bin/sh -c nice run-parts --report /etc/cron.daily 5136 ? SN 0:00 run-parts --report /etc/cron.daily 5358 pts/4 Ss 0:00 bash 5482 ? S 0:00 [kworker/0:1] 5487 ? S 0:01 [kworker/2:0] 5550 ? Sl 1:15 /usr/lib/firefox/plugin-container /usr/lib/flashplugin-installer/libflashplayer.so -greomni /usr/lib/firefox/omni.ja 4957 true plugin 5717 ? S 0:00 /usr/lib/cups/notifier/dbus dbus:// 5824 ? SN 0:00 /bin/sh /etc/cron.daily/update-notifier-common 5825 ? 
SN 0:00 /usr/bin/python /usr/lib/update-notifier/package-data-downloader 5872 ? Sl 0:00 /usr/lib/notify-osd/notify-osd 5888 ? S 0:00 /sbin/udevd --daemon 5889 ? S 0:00 /sbin/udevd --daemon 5909 ? S 0:00 /sbin/dhclient -d -4 -sf /usr/lib/NetworkManager/nm-dhcp-client.action -pf /var/run/sendsigs.omit.d/network-manager.dhclient-eth1.pid -lf /var/lib/dhcp/dhclient-f5f0 5912 ? S 0:00 /usr/sbin/dnsmasq --no-resolv --keep-in-foreground --no-hosts --bind-interfaces --pid-file=/var/run/sendsigs.omit.d/network-manager.dnsmasq.pid --listen-address=127. 5975 pts/1 Ss+ 0:00 /bin/sh -c '/home/bunyamin/Desktop/SSH Tunnel' 5976 pts/1 S+ 0:00 /bin/sh /home/bunyamin/Desktop/SSH Tunnel 5977 pts/1 S+ 0:00 ssh -p443 [email protected] -L 8000:127.0.0.1:8000 5980 ? Sl 0:00 /usr/lib/gvfs/gvfsd-http --spawner :1.6 /org/gtk/gvfs/exec_spaw/2 6034 ? S 0:00 [kworker/u:0] 6054 ? S 0:00 [kworker/2:2] 6070 ? S 0:00 [kworker/0:3] 6094 ? Sl 0:02 gedit /home/bunyamin/Desktop/a.html 6101 ? S 0:00 [kworker/0:2] 6130 pts/4 R+ 0:00 ps ax TinyProxy LOG connect to ad.adserverplus.com:80 mx1.u4gf.com - - [17/Oct/2012 07:38:53] "GET http://ad.tagjunction.com/imp?Z=160x600&s=2959021&T=3&_salt=1516586745&B=12&m=2&u=http%3A%2F%2Fsunshinefelling.com%2Findex.php%3Fview%3Darticle%26catid%3D45%253Aplus-size-dresses%26id%3D7512%253A2012-01-25-22-42-00%26format%3Dpdf%26option%3Dcom_content%26Itemid%3D101&r=1 HTTP/1.0" - - bye bye bye connect to ad.adserverplus.com:80 connect to ad.bharatstudent.com:80 connect to ad.yieldmanager.com:80 142.91.199.250.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.adserverplus.com/imp?Z=0x0&y=29&s=2913320&_salt=2228719469&B=12&m=2&r=1 HTTP/1.0" - - 173.208.94.117 - - [17/Oct/2012 07:38:53] "GET http://ad.adserverplus.com/imp?Z=0x0&y=29&s=3187816&_salt=462045326&B=12&m=2&r=1 HTTP/1.0" - - mx1.a54m.com - - [17/Oct/2012 07:38:53] "GET http://ad.adserverplus.com/imp?Z=300x250&s=2887338&T=3&_salt=2925281520&B=12&m=2&u=http%3A%2F%2Fsecretskirt.com%2Findex.php%3Foption%3Dcom_contact%26view%3Dcontact%26id%3D1%26Itemid%3D95&r=1 HTTP/1.0" - - 108.62.75.54.rdns.ubiquityservers.com - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=300x250&s=3218437&T=3&_salt=2939054384&B=12&m=2&u=http%3A%2F%2Fwww.vifinances.com%2Ffinance-investing%2Finsurance-investment%2Fis-life-insurance-investment-necessarily-the-way-to-go.html&r=1 HTTP/1.0" - - connect to ad.yieldmanager.com:80 connect to ad.globe7.com:80 bye connect to ad.globe7.com:80 connect to ad.globe7.com:80 bye 173.208.94.22 - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=728x90&s=2922824&T=3&_salt=705371051&B=12&m=2&u=%3A%2F%2Fsunshinefelling.com%2Findex.php%3Fview%3Darticle%26catid%3D44%3Amature-womens-fashion%26id%3D6917%3A2012-01-25-22-37-27%26tmpl%3Dcomponent%26print%3D1%26layout%3Ddefault%26page%3D&r=1 HTTP/1.0" - - bye 23.19.10.44.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.globe7.com/st?ad_type=iframe&ad_size=160x600&section=3512129&pub_url=${PUB_URL} HTTP/1.0" - - connect to ad.yieldmanager.com:80 bye 142.91.189.27.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.globe7.com/imp?Z=0x0&y=29&s=3660215&_salt=2921537966&B=12&m=2&r=1 HTTP/1.0" - - connect to ad.scanmedios.com:80 bye 142.91.217.158.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.globaltakeoff.net/st?ad_type=iframe&ad_size=160x600&section=2077929&pub_url=${PUB_URL} HTTP/1.0" - - 23.19.76.194.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET 
http://ad.yieldmanager.com/imp?Z=728x90&s=3127996&T=3&_salt=1952612979&B=12&m=2&u=http%3A%2F%2Fwww.oseey.com%2Fpure-core-watch%2Fcarbon-fiber-watch%2Fcarbon-monoxide-poisoning-awareness.html&r=1 HTTP/1.0" - - mx1.e6sb.com - - [17/Oct/2012 07:38:53] "GET http://ad.scanmedios.com/imp?Z=728x90&s=3522638&T=3&_salt=3444993091&B=12&m=2&u=http%3A%2F%2Fsunshinefelling.com%2Findex.php%3Foption%3Dcom_content%26view%3Darticle%26id%3D6013%3A2012-01-25-22-25-54%26catid%3D40%3Abig-beautiful-women-fashion%26Itemid%3D96&r=1 HTTP/1.0" - - connect to ad.tagjunction.com:80 connect to ad.yieldmanager.com:80 bye connect to ad.yieldmanager.com:80 23.19.76.154.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.adserverplus.com/st?ad_type=iframe&ad_size=300x250&section=2569393 HTTP/1.0" - - connect to ads.creafi-online-media.com:80 bye 108.62.109.115.rdns.ubiquityservers.com - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=0x0&y=29&s=3315330&_salt=2385926515&B=12&m=2&r=1 HTTP/1.0" - - 142.91.217.214.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=160x600&s=3634166&T=3&_salt=1590442300&B=12&m=2&u=http%3A%2F%2Fwealthterritory.com%2Findex.php%3Foption%3Dcom_mailto%26tmpl%3Dcomponent%26link%3DaHR0cDovL3dlYWx0aHRlcnJpdG9yeS5jb20vaW5kZXgucGhwP29wdGlvbj1jb21fY29udGVudCZ2aWV3PWFydGljbGUmaWQ9NDY2NDoyMDExLTA3LTA2LTEzLTI2LTUwJmNhdGlkPTQxOnNlcnZpY2VzJkl0ZW1pZ&r=1 HTTP/1.0" - - 108.62.185.184.rdns.ubiquityservers.com - - [17/Oct/2012 07:38:53] "GET http://ads.creafi-online-media.com/imp?Z=728x90&s=2885766&T=3&_salt=107120374&B=12&m=2&u=http%3A%2F%2Feconomicccore.com%2Findex.php%3Foption%3Dcom_content%26view%3Dcategory%26layout%3Dblog%26id%3D48%26Itemid%3D98%26limitstart%3D45&r=1 HTTP/1.0" - - bye bye bye connect to ad.adserverplus.com:80 connect to ad.yieldmanager.com:80 connect to ad.tagjunction.com:80 bye 108.62.75.252.rdns.ubiquityservers.com - - [17/Oct/2012 07:38:53] "GET http://ad.adserverplus.com/st?ad_type=iframe&ad_size=728x90&section=3213387&pub_url=${PUB_URL} HTTP/1.0" - - bye connect to ad.tagjunction.com:80 bye connect to ad.yieldmanager.com:80 173.208.94.29 - - [17/Oct/2012 07:38:53] "GET http://ad.tagjunction.com/st?ad_type=iframe&ad_size=728x90&section=3006024&pub_url=${PUB_URL} HTTP/1.0" - - 23.19.31.84.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=0x0&y=29&s=2586703&_salt=2905995697&B=12&m=2&r=1 HTTP/1.0" - - oxx-ef-Words.ipwagon.net - - [17/Oct/2012 07:38:53] "GET http://ad.tagjunction.com/imp?Z=0x0&y=29&s=3630499&_salt=4037530564&B=12&m=2&r=1 HTTP/1.0" - - 142.91.185.53.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.tagjunction.com/imp?Z=0x0&y=29&s=3512541&_salt=1134875077&B=12&m=2&r=1 HTTP/1.0" - - connect to ad.globe7.com:80 108.177.187.37.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=300x250&s=3168350&T=3&_salt=548860046&B=12&m=2&u=http%3A%2F%2Flifehealthyliving.com%2Findex.php%3Fview%3Darticle%26catid%3D34%253Ahealthy-food%26id%3D4681%253A2012-05-16-20-40-19%26tmpl%3Dcomponent%26print%3D1%26layout%3Ddefault%26page%3D%26option%3Dcom_content%26Itemid%3D53&r=1 HTTP/1.0" - - connect to ad.adserverplus.com:80 bye connect to ads.creafi-online-media.com:80 108.177.223.180.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.adserverplus.com/imp?Z=300x250&s=3331290&T=3&_salt=1270334669&B=12&m=2&u=http%3A%2F%2Fwww.vegls.com%2Faccident-attorneys-firms%2Fauto-accident-attorney%2Ffind-the-correct-auto-accident-attorney.html&r=1 HTTP/1.0" - - bye 
142.91.185.38.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.globe7.com/st?ad_type=iframe&ad_size=160x600&section=818253 HTTP/1.0" - - connect to ad.yieldmanager.com:80 bye bye bye 108.62.75.230.rdns.ubiquityservers.com - - [17/Oct/2012 07:38:53] "GET http://ads.creafi-online-media.com/st?ad_type=pop&ad_size=0x0&section=3323456&banned_pop_types=29&pop_times=1&pop_frequency=86400&pub_url=${PUB_URL} HTTP/1.0" - - connect to ad.adserverplus.com:80 bye connect to ad.adserverplus.com:80 bye 142.91.217.194.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=300x250&s=3068801&T=3&_salt=1246107431&B=12&m=2&u=http%3A%2F%2Fmoodoffashionandbeauty.com%2Findex.php%3Foption%3Dcom_content%26view%3Darticle%26id%3D756%3A2011-07-13-13-13-43%26catid%3D36%3Afashion-clothes%26Itemid%3D55&r=1 HTTP/1.0" - - connect to ad.smxchange.com:80 108.62.185.235.rdns.ubiquityservers.com - - [17/Oct/2012 07:38:53] "GET http://ad.adserverplus.com/st?ad_type=iframe&ad_size=300x250&section=3307618&pub_url=${PUB_URL} HTTP/1.0" - - connect to ad.globe7.com:80 bye connect to ad.yieldmanager.com:80 bye bye connect to ad.adserverplus.com:80 connect to ad.yieldmanager.com:80 connect to ad.adserverplus.com:80 connect to ad.yieldmanager.com:80 108.177.168.183.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.globe7.com/imp?Z=300x250&s=3582877&T=3&_salt=3271923155&B=12&m=2&u=http%3A%2F%2Fwomenhealthroad.com%2Findex.php%3Foption%3Dcom_content%26view%3Darticle%26id%3D5780%3A2011-12-12-16-56-53%26catid%3D40%3Ahealth-issues%26Itemid%3D96&r=1 HTTP/1.0" - - 23.19.3.100.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=160x600&s=2895969&T=3&_salt=207805714&B=12&m=2&u=http%3A%2F%2Feconomicccore.com%2Findex.php%3Fview%3Darticle%26catid%3D46%253Aeconomic-news%26id%3D6079%253A2011-09-29-07-39-13%26format%3Dpdf%26option%3Dcom_content%26Itemid%3D96&r=1 HTTP/1.0" - - bye 142.91.199.212.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.adserverplus.com/st?ad_type=iframe&ad_size=300x250&section=2956039&pub_url=${PUB_URL} HTTP/1.0" - - bye 142.91.189.169.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=728x90&s=3004691&T=3&_salt=2747591679&B=12&m=2&u=http%3A%2F%2Fwww.qtsfinancial.com%2Findex.php%3Foption%3Dcom_content%26view%3Darticle%26id%3D5406%3Afinancial-statement-english-page%26catid%3D43%3Afinancial-analysis%26Itemid%3D99&r=1 HTTP/1.0" - - connect to ad.adserverplus.com:80 23.19.31.58.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=0x0&y=29&s=3323560&_salt=3172064457&B=12&m=2&r=1 HTTP/1.0" - - connect to ad.adserverplus.com:80 iei-ix-Words.ipwagon.net - - [17/Oct/2012 07:38:53] "GET http://ad.adserverplus.com/imp?Z=728x90&s=3187813&T=3&_salt=1110944041&B=12&m=2&u=http%3A%2F%2Fwww.workinhouses.com%2Fhtml%2Fwallingford-ct-connecticuts-best-places-for-your-home.html&r=1 HTTP/1.0" - - connect to cookex.amp.yahoo.com:80 173.208.94.116 - - [17/Oct/2012 07:38:53] "GET http://ad.adserverplus.com/st?ad_type=iframe&ad_size=300x250&section=3213592&pub_url=${PUB_URL} HTTP/1.0" - - bye bye connect to ad.yieldmanager.com:80 connect to ads.creafi-online-media.com:80 bye 108.62.75.99.rdns.ubiquityservers.com - - [17/Oct/2012 07:38:53] "GET http://ad.adserverplus.com/imp?Z=160x600&s=2913321&T=3&_salt=333033369&B=12&m=2&u=http%3A%2F%2Ffashionstreetlight.com%2Findex.php%3Foption%3Dcom_content%26view%3Darticle%26id%3D28850%3A2011-12-20-12-59-39%26catid%3D45%3Afashion-accessories%26Itemid%3D101&r=1 HTTP/1.0" - 
- bye 142.91.217.208.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://cookex.amp.yahoo.com/v2/cexposer/SIG=18kthu27g/*http%3A//ad.yieldmanager.com/imp?Z=300x250&s=2682517&T=3&_salt=1378331643&B=12&m=2&u=http%3A%2F%2Fwww.economicwindows.com%2Findex.php%3Fview%3Darticle%26catid%3D40%253Afinancial-info%26id%3D3854%253A2011-07-06-13-25-37%26format%3Dpdf%26option%3Dcom_content%26Itemid%3D96&r=1 HTTP/1.0" - - bye bye bye 108.62.185.228.rdns.ubiquityservers.com - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=0x0&y=29&s=3315448&_salt=4241487555&B=12&m=2&r=1 HTTP/1.0" - - 108.62.185.220.rdns.ubiquityservers.com - - [17/Oct/2012 07:38:53] "GET http://ads.creafi-online-media.com/st?ad_type=iframe&ad_size=728x90&section=3269968 HTTP/1.0" - - connect to ad.tagjunction.com:80 bye connect to ad.globe7.com:80 bye 142.91.185.47.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.tagjunction.com/st?ad_type=pop&ad_size=0x0&section=2958317&banned_pop_types=29&pop_times=1&pop_frequency=0&pub_url=${PUB_URL} HTTP/1.0" - - bye 108.177.168.183.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.globe7.com/imp?Z=160x600&s=3582877&T=3&_salt=1313872999&B=12&m=2&u=http%3A%2F%2Fwomenhealthroad.com%2Findex.php%3Foption%3Dcom_content%26view%3Darticle%26id%3D5753%3A2011-12-12-16-56-46%26catid%3D40%3Ahealth-issues%26Itemid%3D96&r=1 HTTP/1.0" - - connect to ad.tagjunction.com:80 bye connect to ad.globe7.com:80 bye connect to ad.adserverplus.com:80 108.62.75.53.rdns.ubiquityservers.com - - [17/Oct/2012 07:38:53] "GET http://ad.tagjunction.com/imp?Z=300x250&s=3127172&T=3&_salt=2152278771&B=12&m=2&u=http%3A%2F%2Fwww.oslims.com%2Ffashion-coffee%2Ffashion-slimming-coffee%2Fso-whats-your-poison-coffee-or-tea.html&r=1 HTTP/1.0" - - connect to ad.yieldmanager.com:80 bye bye 108.62.75.170.rdns.ubiquityservers.com - - [17/Oct/2012 07:38:53] "GET http://ad.adserverplus.com/imp?Z=0x0&y=29&s=2909210&_salt=1773835502&B=12&m=2&r=1 HTTP/1.0" - - 23.19.79.3.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.globe7.com/st?ad_type=iframe&ad_size=728x90&section=3571505&pub_url=${PUB_URL} HTTP/1.0" - - 142.91.217.216.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=160x600&s=3630472&T=3&_salt=462936220&B=12&m=2&u=http%3A%2F%2Fwww.economicwindows.com%2Findex.php%3Fview%3Darticle%26catid%3D41%253Afinancial-services%26id%3D4854%253A2011-07-06-13-26-56%26tmpl%3Dcomponent%26print%3D1%26layout%3Ddefault%26page%3D%26option%3Dcom_content%26Itemid%3D97&r=1 HTTP/1.0" - - connect to ad.yieldmanager.com:80 connect to ad.adserverplus.com:80 connect to ad.yieldmanager.com:80 bye connect to ad.yieldmanager.com:80 bye connect to ad.yieldmanager.com:80 142.91.189.176.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=160x600&s=3187822&T=3&_salt=325267799&B=12&m=2&u=http%3A%2F%2Feconomysea.com%2Findex.php%3Foption%3Dcom_mailto%26tmpl%3Dcomponent%26link%3DaHR0cDovL2Vjb25vbXlzZWEuY29tL2luZGV4LnBocD9vcHRpb249Y29tX2NvbnRlbnQmdmlldz1hcnRpY2xlJmlkPTYzNDk6MjAxMS0wOS0yOC0yMC0wNC0xOSZjYXRpZD00NzplY29ub21pYy1uZXdzJkl0ZW1pZD05Nw&r=1 HTTP/1.0" - - connect to ad.adserverplus.com:80 142.91.190.240.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=160x600&s=2956040&T=3&_salt=3354730349&B=12&m=2&u=http%3A%2F%2Fdomarketings.com%2Findex.php%3Foption%3Dcom_content%26view%3Darticle%26id%3D279%3AWhy-Contractor-Leads-Are-Best-For-Getting-Ideal-Construction-Prospects%26catid%3D2%3Abusiness&r=1 HTTP/1.0" - - bye 
108.62.75.6.rdns.ubiquityservers.com - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=160x600&s=3323456&T=3&_salt=1244915826&B=12&m=2&u=http%3A%2F%2Fdomarketings.com%2Findex.php%3Foption%3Dcom_content%26view%3Darticle%26id%3D989%3AThe-Basics-of-Failure-Mode-and-Effective-Analysis%26catid%3D2%3Abusiness&r=1 HTTP/1.0" - - bye 142.91.217.220.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=728x90&s=2921135&T=3&_salt=1337464905&B=12&m=2&u=http%3A%2F%2Financezone.com%2Findex.php%3Foption%3Dcom_content%26view%3Darticle%26id%3D7236%3A2011-09-05-19-56-54%26catid%3D49%3Acareer-banking%26Itemid%3D99&r=1 HTTP/1.0" - - bye connect to ad.yieldmanager.com:80 108.62.178.229.rdns.ubiquityservers.com - - [17/Oct/2012 07:38:53] "GET http://ad.adserverplus.com/st?ad_type=iframe&ad_size=160x600&section=3168350&pub_url=${PUB_URL} HTTP/1.0" - - connect to ad.yieldmanager.com:80 108.177.168.187.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.smxchange.com/st?ad_type=iframe&ad_size=300x250&section=3285387&pop_nofreqcap=1&pub_url=${PUB_URL} HTTP/1.0" - - skg-wr-Words.ipwagon.net - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=0x0&y=29&s=3153972&_salt=3512711469&B=12&m=2&r=1 HTTP/1.0" - - bye connect to ad.yieldmanager.com:80 bye connect to ad.yieldmanager.com:80 mx1.u4gf.com - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=160x600&s=2959021&T=3&_salt=1516586745&B=12&m=2&u=http%3A%2F%2Fsunshinefelling.com%2Findex.php%3Fview%3Darticle%26catid%3D45%253Aplus-size-dresses%26id%3D7512%253A2012-01-25-22-42-00%26format%3Dpdf%26option%3Dcom_content%26Itemid%3D101&r=1 HTTP/1.0" - -

    Read the article

  • Custom fail2ban Filter

    - by Michael Robinson
    In my quest to block excessive failed phpMyAdmin login attempts with fail2ban, I've created a script that logs said failed attempts to a file: /var/log/phpmyadmin_auth.log Custom log The format of the /var/log/phpmyadmin_auth.log file is: phpMyadmin login failed with username: root; ip: 192.168.1.50; url: http://somedomain.com/phpmyadmin/index.php phpMyadmin login failed with username: ; ip: 192.168.1.50; url: http://192.168.1.48/phpmyadmin/index.php Custom filter [Definition] # Count all bans in the logfile failregex = phpMyadmin login failed with username: .*; ip: <HOST>; phpMyAdmin jail [phpmyadmin] enabled = true port = http,https filter = phpmyadmin action = sendmail-whois[name=HTTP] logpath = /var/log/phpmyadmin_auth.log maxretry = 6 The fail2ban log contains: 2012-10-04 10:52:22,756 fail2ban.server : INFO Stopping all jails 2012-10-04 10:52:23,091 fail2ban.jail : INFO Jail 'ssh-iptables' stopped 2012-10-04 10:52:23,866 fail2ban.jail : INFO Jail 'fail2ban' stopped 2012-10-04 10:52:23,994 fail2ban.jail : INFO Jail 'ssh' stopped 2012-10-04 10:52:23,994 fail2ban.server : INFO Exiting Fail2ban 2012-10-04 10:52:24,253 fail2ban.server : INFO Changed logging target to /var/log/fail2ban.log for Fail2ban v0.8.6 2012-10-04 10:52:24,253 fail2ban.jail : INFO Creating new jail 'ssh' 2012-10-04 10:52:24,253 fail2ban.jail : INFO Jail 'ssh' uses poller 2012-10-04 10:52:24,260 fail2ban.filter : INFO Added logfile = /var/log/auth.log 2012-10-04 10:52:24,260 fail2ban.filter : INFO Set maxRetry = 6 2012-10-04 10:52:24,261 fail2ban.filter : INFO Set findtime = 600 2012-10-04 10:52:24,261 fail2ban.actions: INFO Set banTime = 600 2012-10-04 10:52:24,279 fail2ban.jail : INFO Creating new jail 'ssh-iptables' 2012-10-04 10:52:24,279 fail2ban.jail : INFO Jail 'ssh-iptables' uses poller 2012-10-04 10:52:24,279 fail2ban.filter : INFO Added logfile = /var/log/auth.log 2012-10-04 10:52:24,280 fail2ban.filter : INFO Set maxRetry = 5 2012-10-04 10:52:24,280 fail2ban.filter : INFO Set findtime = 600 2012-10-04 10:52:24,280 fail2ban.actions: INFO Set banTime = 600 2012-10-04 10:52:24,287 fail2ban.jail : INFO Creating new jail 'fail2ban' 2012-10-04 10:52:24,287 fail2ban.jail : INFO Jail 'fail2ban' uses poller 2012-10-04 10:52:24,287 fail2ban.filter : INFO Added logfile = /var/log/fail2ban.log 2012-10-04 10:52:24,287 fail2ban.filter : INFO Set maxRetry = 3 2012-10-04 10:52:24,288 fail2ban.filter : INFO Set findtime = 604800 2012-10-04 10:52:24,288 fail2ban.actions: INFO Set banTime = 604800 2012-10-04 10:52:24,292 fail2ban.jail : INFO Jail 'ssh' started 2012-10-04 10:52:24,293 fail2ban.jail : INFO Jail 'ssh-iptables' started 2012-10-04 10:52:24,297 fail2ban.jail : INFO Jail 'fail2ban' started When I issue: sudo service fail2ban restart fail2ban emails me to say ssh has restarted, but I receive no such email about my phpmyadmin jail. Repeated failed logins to phpMyAdmin does not cause an email to be sent. Have I missed some critical setup? Is my filter's regular expression wrong? 
Update: added changes from default installation Starting with a clean fail2ban installation: cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local Change email address to my own, action to: action = %(action_mwl)s Append the following to jail.local [phpmyadmin] enabled = true port = http,https filter = phpmyadmin action = sendmail-whois[name=HTTP] logpath = /var/log/phpmyadmin_auth.log maxretry = 4 Add the following to /etc/fail2ban/filter.d/phpmyadmin.conf # phpmyadmin configuration file # # Author: Michael Robinson # [Definition] # Option: failregex # Notes.: regex to match the password failures messages in the logfile. The # host must be matched by a group named "host". The tag "<HOST>" can # be used for standard IP/hostname matching and is only an alias for # (?:::f{4,6}:)?(?P<host>\S+) # Values: TEXT # # Count all bans in the logfile failregex = phpMyadmin login failed with username: .*; ip: <HOST>; # Option: ignoreregex # Notes.: regex to ignore. If this regex matches, the line is ignored. # Values: TEXT # # Ignore our own bans, to keep our counts exact. # In your config, name your jail 'fail2ban', or change this line! ignoreregex = Restart fail2ban sudo service fail2ban restart PS: I like eggs

    Read the article

  • Squid refresh_pattern won't cache "Expires: ..."

    - by Marcelo Cantos
    Background I frequent the OpenGL ES documentation site at http://www.khronos.org/opengles/sdk/1.1/docs/man/. Even though the content is completely static, it seems to force a reload on every single page I visit, which is very annoying. I have a squid 3.0 proxy set up (apt-get install squid3 on Ubuntu 10.04), and I added a refresh_pattern to force the pages to cache: refresh_pattern ^http://www.khronos.org/opengles/sdk/1\.1/docs/man/ … 1440 20% 10080 … override-expire ignore-reload ignore-no-cache ignore-private ignore-no-store This is all on one line, of course. While this appears to work for the XHTML documents (e.g., glBindTexture), it fails to cache the linked content, such as the DTD, some .ent files (?) and some XSL files. The delay in fetching these extra files delays rendering of the main document, so my principal annoyance isn't fixed. The only difference I can glean with these ancillary files is that they come with an Expires: header set to the current time, whereas the XHTML document has none. But I would have expected the override-expire option to fix this. I have confirmed that documents have the same base URL. I have also truncated the pattern to varying degrees, with no effect. My questions Why does the override-expire option not seem to work? Is there a simple way to tell squid to unconditionally cache a document, no matter what it finds in the response headers? (Hopefully) relevant output cache.log Jan 01 10:33:30 1970/06/25 21:18:27| Processing Configuration File: /etc/squid3/squid.conf (depth 0) Jan 01 10:33:30 1970/06/25 21:18:27| WARNING: use of 'override-expire' in 'refresh_pattern' violates HTTP Jan 01 10:33:30 1970/06/25 21:18:27| WARNING: use of 'ignore-reload' in 'refresh_pattern' violates HTTP Jan 01 10:33:30 1970/06/25 21:18:27| WARNING: use of 'ignore-no-cache' in 'refresh_pattern' violates HTTP Jan 01 10:33:30 1970/06/25 21:18:27| WARNING: use of 'ignore-no-store' in 'refresh_pattern' violates HTTP Jan 01 10:33:30 1970/06/25 21:18:27| WARNING: use of 'ignore-private' in 'refresh_pattern' violates HTTP Jan 01 10:33:30 1970/06/25 21:18:27| DNS Socket created at 0.0.0.0, port 37082, FD 10 Jan 01 10:33:30 1970/06/25 21:18:27| Adding nameserver 192.168.1.1 from /etc/resolv.conf Jan 01 10:33:30 1970/06/25 21:18:27| Accepting HTTP connections at 0.0.0.0, port 3128, FD 11. Jan 01 10:33:30 1970/06/25 21:18:27| Accepting ICP messages at 0.0.0.0, port 3130, FD 13. Jan 01 10:33:30 1970/06/25 21:18:27| HTCP Disabled. Jan 01 10:33:30 1970/06/25 21:18:27| Loaded Icons. Jan 01 10:33:30 1970/06/25 21:18:27| Ready to serve requests. 
access.log Jun 25 21:19:35 2010.710 0 192.168.1.50 TCP_MEM_HIT/200 2452 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/glBindTexture.xml - NONE/- text/xml Jun 25 21:19:36 2010.263 543 192.168.1.50 TCP_MISS/304 322 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml1-transitional.dtd - DIRECT/74.54.224.215 - Jun 25 21:19:36 2010.276 556 192.168.1.50 TCP_MISS/304 370 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/mathml.xsl - DIRECT/74.54.224.215 - Jun 25 21:19:36 2010.666 278 192.168.1.50 TCP_MISS/304 322 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-lat1.ent - DIRECT/74.54.224.215 - Jun 25 21:19:36 2010.958 279 192.168.1.50 TCP_MISS/304 322 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-symbol.ent - DIRECT/74.54.224.215 - Jun 25 21:19:37 2010.251 276 192.168.1.50 TCP_MISS/304 322 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-special.ent - DIRECT/74.54.224.215 - Jun 25 21:19:37 2010.332 0 192.168.1.50 TCP_IMS_HIT/304 316 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/ctop.xsl - NONE/- text/xml Jun 25 21:19:37 2010.332 0 192.168.1.50 TCP_IMS_HIT/304 316 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/pmathml.xsl - NONE/- text/xml store.log Jun 25 21:19:36 2010.263 RELEASE -1 FFFFFFFF D3056C09B42659631A65A08F97794E45 304 1277464776 -1 1277464776 unknown -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml1-transitional.dtd Jun 25 21:19:36 2010.276 RELEASE -1 FFFFFFFF 9BF7F37442FD84DD0AC0479E38329E3C 304 1277464776 -1 1277464776 unknown -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/mathml.xsl Jun 25 21:19:36 2010.666 RELEASE -1 FFFFFFFF 7BCFCE88EC91578C8E2589CB6310B3A1 304 1277464776 -1 1277464776 unknown -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-lat1.ent Jun 25 21:19:36 2010.958 RELEASE -1 FFFFFFFF ECF1B24E437CFAA08A2785AA31A042A0 304 1277464777 -1 1277464777 unknown -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-symbol.ent Jun 25 21:19:37 2010.251 RELEASE -1 FFFFFFFF 36FE3D76C80F0106E6E9F3B7DCE924FA 304 1277464777 -1 1277464777 unknown -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-special.ent Jun 25 21:19:37 2010.332 RELEASE -1 FFFFFFFF A33E5A5CCA2BFA059C0FA25163485192 304 1277462871 1221139523 1277462871 text/xml -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/ctop.xsl Jun 25 21:19:37 2010.332 RELEASE -1 FFFFFFFF E2CF8854443275755915346052ACE14E 304 1277462872 1221139523 1277462872 text/xml -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/pmathml.xsl

    Read the article

< Previous Page | 359 360 361 362 363 364 365 366 367 368 369 370  | Next Page >