Search Results

Search found 20953 results on 839 pages for 'chart control'.


  • How to manage a home-grown YUM package repo?

    - by TomOnTime
    There are plenty of websites that explain how to manage a mirror of YUM repos, but I want to run a repo for my own home-grown packages. Is there a good way to manage such repos? What I need to do:

    - Manage 3 repos: unstable, testing, stable.
    - Self-service functions that let users add/remove/promote packages ("promote" means moving a package unstable -> testing or testing -> stable).
    - ACLs that control which users/groups may add/remove/promote packages.
    - Automatically re-sign packages as they move from repo to repo (since the GPG key for "stable" should be different from the one for "unstable").
    - Automatically run "createrepo" to update the repodata when needed.

    Suggestions?
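    Absent an off-the-shelf tool, the promote step itself is small enough to script. Below is a minimal Python sketch, assuming a conventional layout under /srv/yum and that rpmsign and createrepo are on the PATH; the paths, key IDs, and exact flags are illustrative assumptions, not a tested workflow:

        #!/usr/bin/env python
        """Hypothetical promote script: move an RPM between repos and re-sign it."""
        import shutil
        import subprocess
        import sys

        REPOS = {"unstable": "/srv/yum/unstable",
                 "testing":  "/srv/yum/testing",
                 "stable":   "/srv/yum/stable"}
        # Assumed mapping of destination repo -> GPG key used for re-signing.
        KEYS = {"testing": "testing@example.com", "stable": "stable@example.com"}

        def promote(rpm_name, src, dst):
            src_path = "%s/%s" % (REPOS[src], rpm_name)
            dst_path = "%s/%s" % (REPOS[dst], rpm_name)
            shutil.move(src_path, dst_path)
            # Re-sign with the destination repo's key (flag usage is assumed;
            # older rpm versions select the key via %_gpg_name instead).
            subprocess.check_call(["rpmsign", "--key-id", KEYS[dst],
                                   "--addsign", dst_path])
            # Rebuild the repodata for both repos.
            for repo in (src, dst):
                subprocess.check_call(["createrepo", "--update", REPOS[repo]])

        if __name__ == "__main__":
            promote(sys.argv[1], sys.argv[2], sys.argv[3])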

    Read the article

  • YSlow says certain CSS are not gzipped

    - by rhand
    YSlow keeps on telling me files like http://www.example.com/wp-content/plugins/q-and-a/css/q-a-plus.css?ver=1.0.6.2 are not gzipped, while the gzip test tool at Feed the Bot says I am all good:

        Compressed?              Yes
        Compression type         gzip
        Page size (Bytes)        32,493
        Compressed size (Bytes)  -1
        Saving (Bytes)           32,494
        Compression %            100%

    I added this to my .htaccess:

        # Gzip
        <ifModule mod_gzip.c>
        mod_gzip_on Yes
        mod_gzip_dechunk Yes
        mod_gzip_item_include file .(html?|txt|css|js|php|pl)$
        mod_gzip_item_include handler ^cgi-script$
        mod_gzip_item_include mime ^text/.*
        mod_gzip_item_include mime ^application/x-javascript.*
        mod_gzip_item_exclude mime ^image/.*
        mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
        </ifModule>

        # Deflate
        <ifModule mod_deflate.c>
        AddOutputFilterByType DEFLATE text/text text/html text/plain text/xml text/css application/x-javascript application/javascript
        </ifModule>

    The headers for the file mentioned state:

        CF-Cache-Status    MISS
        CF-RAY             13945df90a9a0c1d-AMS
        Cache-Control      public, max-age=2592000
        Connection         keep-alive
        Content-Encoding   gzip
        Content-Type       application/javascript
        Date               Thu, 12 Jun 2014 07:34:38 GMT
        Expires            Sat, 12 Jul 2014 07:34:38 GMT
        Last-Modified      Thu, 21 Feb 2013 01:29:18 GMT
        Server             cloudflare-nginx
        Transfer-Encoding  chunked
        Vary               Accept-Encoding

    Any ideas what I am missing here?
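    One way to cross-check what YSlow sees is to fetch the file yourself with and without gzip in the Accept-Encoding header. A minimal sketch, assuming the Python requests library is available (the URL is the placeholder from the question):

        import requests

        URL = "http://www.example.com/wp-content/plugins/q-and-a/css/q-a-plus.css?ver=1.0.6.2"

        # Ask for gzip explicitly; requests decodes compressed bodies
        # transparently, so inspect the headers to see what the server sent.
        r = requests.get(URL, headers={"Accept-Encoding": "gzip"})
        print("With gzip accepted:   ", r.headers.get("Content-Encoding"))

        # "identity" asks the server not to compress at all.
        r = requests.get(URL, headers={"Accept-Encoding": "identity"})
        print("Without gzip accepted:", r.headers.get("Content-Encoding"))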

    Read the article

  • What are the most necessary non-language specific things a programmer needs to know?

    - by Josh
    I've been thinking about this a lot lately. Right now I work at a little web company, am almost done with school, and have written an iPhone app, but I'm not sure what else I need to focus my learning energies on. I've decided I want to do software programming, so I've been actively reading everything I can get my hands on that deals with Objective-C / C++ (Cocoa, OpenGL, etc.). But those are not the things I'm talking about. I know I need to "master" a language or two. What I'm talking about are the other "things", such as learning and using source control, design patterns, etc. What thing (or things, just one per response) would you say I should concurrently be focusing on? You can consider in your answer that I want to follow the aforementioned career path, but you don't have to. I just want a nice list of things to research and actually use in my career. Also, can someone help me with tagging this? I'm not sure what exactly would be a good tag for such a question.

    Read the article

  • We are moving an Access based corporate front-end into a Web-based App

    - by Max Vernon
    We have an enterprise application with a front end written in Microsoft Access 2003 that has evolved over the past 6 years. The back-end data, and a fair amount of back-end logic, is contained within several Microsoft SQL Server databases. The front-end app consists of around 180 forms and over 120,000 lines of code, and interacts with VB.Net DLLs that support various critical functions used by our sales force. The current system makes use of 3 monitors to display various information; the Access app uses COM+ to control Microsoft Outlook and Internet Explorer for various purposes. The Access front end sometimes occupies 2 screens, automatically resizing itself based on Windows API-reported screen dimensions. The app also uses a Google map to present data to our agents, and allows two-way interactivity with the map through COM+ connectivity to JavaScript contained in the Google map.

    At the urging of senior management, we are looking to completely rewrite this application using some web-based technology, such as ASP.Net or perhaps a LAMP stack (the thinking with the LAMP stack being that "free" is pretty cheap). We want to move to a web-based app so we can eliminate the dependency on our physical location for hiring new sales force members. Currently, our main office is full to capacity, and we need to continue growing the company.

    Does anyone have any thoughts on what would be the best technology to use for a web-based app of this magnitude, keeping in mind that the app is dependent on back-end services in our existing infrastructure? The app handles financial data and personal customer data, among other things. [I've looked at "Best practices for moving large MS Access application towards .Net?" and read the answers and most of the comments. Interesting reading with some valid points, but our C.O.O. and contracted Software Architect are pushing for a full web-based app, not a .Net Windows app.]

    Read the article

  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let's review each area in more detail.

    Simpler Code

    My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).

    Strongly Typed

    Before diving into the code, note that the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.

        // With the SDK
        public class MyData1 : TableServiceEntity
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

        // With the Enzo Azure API
        public class MyData2 : BaseAzureTable
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

    Now that the classes representing an Azure Table entity are defined, let's review what the code looks like with the Azure SDK when fetching all the entities from an Azure Table (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):

        // With the Azure SDK
        public List<MyData1> FetchAllEntities()
        {
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
            CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
            TableServiceContext serviceContext = tableClient.GetDataServiceContext();
            CloudTableQuery<MyData1> partitionQuery =
                (from e in serviceContext.CreateQuery<MyData1>(_tableName)
                 select new MyData1()
                 {
                     PartitionKey = e.PartitionKey,
                     RowKey = e.RowKey,
                     Timestamp = e.Timestamp,
                     Message = e.Message,
                     Level = e.Level,
                     Severity = e.Severity
                 }).AsTableServiceQuery<MyData1>();
            return partitionQuery.ToList();
        }

    This code gives you automatic retries because AsTableServiceQuery does that for you. Also, note that this method is strongly typed because it is using LINQ. Although this doesn't look like too much code at first glance, you are actually mapping the strongly typed object manually, so for larger entities, with dozens of properties, your code will grow. And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all of them) to reduce the network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp).

    The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:

        // With the Enzo Azure API
        public List<MyData2> FetchAllEntities()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
            return res;
        }

    As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).

    Fetch Strategies

    Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than if you were to fetch the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters (['a', 'b'[, ['b', 'c'[, ['c', 'd'[, ...), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes two to three times faster than the sequential methods discussed previously):

        public List<MyData2> FetchAllEntitiesGUID()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
            return res;
        }

    Faster Results

    With Sequential Fetch Methods

    Developing a faster API wasn't a primary objective, but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different, so it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless, when fetching data it seems that the Enzo Azure API delivers faster results. For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each): the average elapsed time shows that the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (a 39% improvement).

    With Fetch Strategies

    When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1KB each) with an average execution time over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that this test quickly hit a limit on my network bandwidth (3.56 Mbps), so the results of the fetch strategy are significantly below what they could be with higher bandwidth.

    Additional Methods

    The API wouldn't be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities:

    - Support for batch updates, deletes and inserts
    - Conversion of entities to DataRow, and List<> to a DataTable
    - Extension methods for Delete, Merge, Update, Insert
    - Support for asynchronous calls and cancellation
    - Support for fetch statistics (total bytes, total REST calls, retries...)

    For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).

    About Herve Roggero

    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including MCDBA, MCSE and MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Read the article

  • Odd internet packet routing

    - by NachoChip
    I want to know if there is any way to explicitly control packet routing. I am trying to connect to my computer in HK from San Francisco. It is extremely slow, so I used tracert to see what is going on. It seems the packets get routed from HK to Europe, then to New York, and then to San Francisco. In the US, I am using Astound Cable. Is there any way I can force the packets not to go around the world before they reach my computer, or is it all ISP-dependent?

    Read the article

  • VGA no signal on LCD monitor attached to laptop

    - by Paul
    I bought a new Asus VH242H LCD monitor for use with my Lenovo T60 laptop running XP Professional. Display info under Control Panel says "Intel(R) 945GM Express Chipset Family". I am connecting via VGA. When I connect the monitor I get "VGA no signal" and the monitor screen stays blank, even though I have selected the monitor as the display device on the laptop. The information panel on the monitor displays the correct screen resolution from the laptop, so the monitor is communicating with the laptop in some way. I've successfully tried the monitor with my Dell Inspiron 1525 running Windows Vista, I've changed the VGA cable to one I know works, and I've tried different resolutions. I cannot find any specific drivers on the internet for this monitor, so I assume it should work with Plug and Play. Does anyone know what the problem could be?

    Read the article

  • Moving all UI logic to Client Side?

    - by Mag20
    Our team originally consisted of mostly server-side developers with minimal expertise in Javascript. In ASP.NET we used to write a lot of UI logic in code-behind or, more recently, through controllers in MVC. A little while ago two highly skilled client-side developers joined our team. They can do in HTML/CSS/Javascript pretty much anything that we could previously do with server-side code and server-side web controls:

    - Show/hide controls
    - Do validation
    - Control AJAX refreshing

    So I started to think that maybe it would be more efficient to just create a high-level API around our business logic, kinda like the Amazon Fulfillment API (http://docs.amazonwebservices.com/fws/latest/APIReference/), so that client-side developers would fully take over the UI, while server-side developers would concentrate only on business logic. So for an ordering system you would have a high-level API like:

        OrderService.asmx
            CreateOrderResponse CreateOrder(CreateOrderRequest)
            AddOrderItem
            AddPayment
            SubmitPayment
            GetOrderByID
            FindOrdersByCriteria
            ...

    There would be JSON/REST access to the API, so it would be easy to consume from the client-side UI. We could use this API both for internal UI development and for third parties to create their own applications. With the advances in Javascript and the availability of good client-side developers, is it a good time to get rid of code-behind/controllers and just concentrate on developing high-level APIs (a la Amazon) that client-side developers can consume?
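    For illustration, consuming such an API from a client could look like the Python sketch below; the endpoint, payload shape, and field names are hypothetical, not part of any real ordering service:

        import json
        import urllib.request

        BASE = "https://api.example.com/orders"  # hypothetical endpoint

        def create_order(customer_id, items):
            """POST a new order; the server owns all the business logic."""
            payload = json.dumps({"customerId": customer_id,
                                  "items": items}).encode()
            req = urllib.request.Request(
                BASE, data=payload,
                headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req) as resp:
                # Hypothetical response shape: {"orderId": ..., "status": ...}
                return json.load(resp)

        order = create_order(42, [{"sku": "ABC-1", "qty": 2}])
        print(order)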

    Read the article

  • Collision checking problem on a Tiled map

    - by nosferat
    I'm working on a Pac-Man styled dungeon crawler, using the free Oryx sprites. I've created the map using Tiled, separating the floor, walls and treasure into three different layers. After importing the map into libGDX, it renders fine. I also added the player character; for now it just moves in one direction, the player cannot control it yet. I wanted to add collision, and I was planning to do this by checking whether the player's new position is on a wall tile. Therefore, as you can see in the following code snippet, I get the tile type of the appropriate tile, and if it is not zero (since on that layer there is nothing except the wall tiles) it is a collision and the player cannot move further:

        final Vector2 newPos = charController.move(warrior.getX(), warrior.getY());
        if (!collided(newPos)) {
            warrior.setPosition(newPos.x, newPos.y);
            warrior.flip(charController.flipX(), charController.flipY());
        }
        [..]
        private boolean collided(Vector2 newPos) {
            int row = (int) Math.floor(newPos.x / 32);
            int col = (int) Math.floor(newPos.y / 32);
            int tileType = tiledMap.layers.get(1).tiles[row][col];
            if (tileType == 0) {
                return false;
            }
            return true;
        }

    The character only moves one tile with this code. If I reduce the col value by two, it moves two more tiles. I think the problem is around indexing, but I'm totally confused, because the zero of the coordinate system in libGDX is in the bottom left corner of the screen, and I don't know whether the tiles array's indexing is similar or not. The size of the map is 19x21 tiles. (The original post included an image of the map, with the starting position of the player marked in blue.)

    Read the article

  • Varnish doesn't seem to be caching

    - by Charlie Somerville
    I've set up a Varnish cache mirror to sit in front of a file server, but it seems to be endlessly re-downloading data from my file server. There's about 100GB of data in total, but so far Varnish has downloaded 800GB from my file server. I'm using the default VCL file that comes with Varnish, and the response headers for files served by the file server are similar to the following:

        HTTP/1.1 200 OK
        Cache-Control: max-age=290304000, public
        Content-Type: image/jpeg
        Expires: Wed, 29 Dec 2010 21:38:33 GMT
        Server: Microsoft-IIS/7.0
        E-Tag: "8b4723296ab697530768f18b1378b269"
        Content-Disposition: inline; filename=image046.jpg;
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Thu, 23 Dec 2010 05:38:33 GMT
        Content-Length: 100592

    I'm starting varnishd with the following options:

        varnish/sbin/varnishd -a 0.0.0.0:80 \
            -f varnish/etc/varnish/default.vcl \
            -s file,varnish/var/lib/varnish/varnish_storage.bin,100G
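    A quick way to verify whether Varnish is serving hits at all is to request the same URL twice and inspect the cache-related headers: on a hit, the Age header is greater than zero and the X-Varnish header carries two transaction IDs. A minimal Python sketch (the URL is a placeholder):

        import urllib.request

        URL = "http://cache.example.com/some/file.jpg"  # placeholder

        def fetch_headers(url):
            with urllib.request.urlopen(url) as resp:
                return dict(resp.headers.items())

        first = fetch_headers(URL)
        second = fetch_headers(URL)

        # On a cache hit the second response should have Age > 0 and an
        # X-Varnish header containing two XIDs (this request plus the one
        # that originally inserted the object into the cache).
        for name in ("Age", "X-Varnish"):
            print(name, "->", second.get(name))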

    Read the article

  • System Restore Points

    - by Ross Lordon
    Currently I am investigating how to schedule automatic creation of a system restore point for all of the workstations in my office. It seems that Task Scheduler already has some nice defaults (screenshots below). Even the history for this task verifies that it is running successfully. However, when I go to Recovery in the Control Panel, it only lists System Restore Points, one for each previous day, going back only 3 weeks, even if I check the "show more restore points" box. Why don't the additional ones appear? Would there be a better way to implement a solution, like via group policy or a script? Is there any documentation on this online? I've been having a hard time tracking anything down, except for how to subvert group policy disabling this feature.

    Read the article

  • What reasons are there to reduce the max-age of a logo to just 8 days? [closed]

    - by callum
    Most websites set max-age=31536000 (1 year) in the Cache-Control headers of static assets such as logo images. Examples: YouTube, Yahoo, Twitter, BBC. But there is a notable exception: Google's logo has max-age=691200 (8 days). I've checked the headers on the Google logo in the past, and it definitely used to be 1 year. (Also, it used to be part of a sprite, and now it is a standalone logo image, but that's probably another question...)

    What could be valid technical reasons why they would want to reduce its cache lifetime to just 8 days? Google's homepage is one of the most carefully optimised pages in the world, so I imagine there's a good reason.

    Edit: Please make sure you understand these points before answering:

    - Nobody uses short max-age lifetimes to allow modifying a static asset in future. When you modify it, you just serve it at a different URL. So no, it's nothing to do with Google doodles. Think about it: even if Google didn't understand this basic trick of HTTP, 8 days still wouldn't be appropriate, as only those users who don't have the original logo cached would see the doodle on doodle-day – and then that group of users would go on seeing the doodle for the following 8 days after Google changed it back :)
    - Web servers do not worry about "filling up" the caches of clients (or proxies). The client manages this by itself – when it hits its own storage limit, it just starts dropping the lowest-priority items to make space for new items. The priority score is based on the question "How likely am I to benefit from having cached this URL?", which is nothing to do with what max-age value the server sent when the URL was originally requested; it's a heuristic based on the "frecency" of requests for that URL. The max-age simply lets the server set a cut-off point – the time at which the client is supposed to discard the item regardless of how often it's being re-used. It would be very nice and trusting of a downstream client/proxy to rely on all origin servers "holding back" from filling up their caches, but I don't think we live in that world ;)
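    To reproduce the observation, the headers are easy to inspect from a few lines of Python; the logo URL below is an assumption, since Google has moved the file around over the years:

        import urllib.request

        # Assumed logo URL at the time of writing; substitute the current one.
        URL = "https://www.google.com/images/srpr/logo11w.png"

        req = urllib.request.Request(URL, method="HEAD")
        with urllib.request.urlopen(req) as resp:
            # Compare these against the 1-year lifetimes of other sites' logos.
            print("Cache-Control:", resp.headers.get("Cache-Control"))
            print("Expires:      ", resp.headers.get("Expires"))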

    Read the article

  • Cannot connect to telnet server

    - by BloodPhilia
    I can't use telnet to connect to any server from this machine, but it works fine from a different computer. It just says it can't connect. I tried the following things:

    - Disabled the firewall and AV protection. (Basically, no security feature was left online.)
    - Telnet is set to "Trusted" in my AV protection (Kaspersky Internet Security 2011).
    - Used PuTTY to telnet, but apparently PuTTY's connection is also inhibited. (It says it can't connect to the host.)
    - Disabled the telnet client in Control Panel and then re-enabled it. (Windows 7 Ultimate)
    - The hosts file is clean.
    - Checked for nasties using MBAM and KIS 2011, as well as going through my HijackThis logs; nothing found.

    I can connect to the same machines/servers through the web browser, ping, tracert, etc. Only telnet seems to be blocked. Any other thoughts?
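    To rule out the telnet client itself, it can help to test raw TCP connectivity to port 23 directly; a minimal Python sketch (the host name is a placeholder):

        import socket

        HOST, PORT = "server.example.com", 23  # placeholder host

        try:
            # If this succeeds, TCP to port 23 works and the telnet client
            # (or something hooking into it) is the problem, not the network.
            with socket.create_connection((HOST, PORT), timeout=5) as s:
                print("TCP connect OK:", s.getpeername())
        except OSError as exc:
            print("TCP connect failed:", exc)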

    Read the article

  • Should this folder called Data be indexed?

    - by panny
    In the indexing options of Windows 7 there is a folder called Data which is excluded from indexing for the C:\ drive by default. Can someone confirm this, please? I was not able to locate that folder on my drive, nor include it in the search index. The difference in the number of indexed files is unsatisfying: Windows 7's native indexing service indexes 377,703 files on six drives; a third-party desktop search indexing service indexes 698,654 files on the same number of drives. Files under User Account Control seem not to be indexed without the proper privileges. How can this be circumvented?

    Read the article

  • Do we have enough time to build an electric car future?

    - by julien.groues
    A recent article from Greenbang has posed the question "Do we have enough time to build an electric car future?". The writer discusses that, although the future of transport might lie with electric cars, there is concern about whether we'll be able to build the market and infrastructure required to support them before carbon and oil constraints create difficulties in powering the vehicles.

    Of course, the increasing use of electric vehicles (EVs) is going to put excessive pressure on energy grids, as large volumes of electricity will need to be directed to charging points, which in turn must handle fluctuating demand at peak times. EVs are increasing in popularity as a sustainable method of transport to reduce carbon consumption, and electric utilities will have the opportunity, and the challenge, to quickly determine the best methods to fuel these vehicles and accommodate the associated increases in demand for energy. Critically, efficient software is required to provide diagnostic and predictive capabilities related to EV refuelling - for example, anticipated electricity flow will need to be addressed as the number of EVs on the road increases, and electricity will need to be directed to specific areas on demand as vehicles attempt to recharge en masse.

    But a smart grid infrastructure can meet these demands, intelligently. The implementation of a smart grid is not in the distant future; it is an achievable reality for utilities via simple installation of new software and technologies, which can be done incrementally for those facing existing legacy systems or concerned with upfront costs. The smart grid is integral to the monitoring and control of energy use, as well as the future-proofing of the energy grid. A smart grid will be critical to meeting the electricity requirements of new EVs and will ensure their successful deployment by providing a reliable foundation for the data handling required to record and manage electricity distribution - from recording and assessing energy usage, to analysing data and sharing information with consumers via green billing.

    http://www.greenbang.com/do-we-have-enough-time-to-build-an-electric-car-future_14248.html

    Read the article

  • Creative software leftovers - ASIO error message

    - by Tony Patriarche
    I temporarily installed an old Creative Audigy 2 sound board on my Vista x64 Home Premium computer. Bad idea! I uninstalled the board and all software visible in the Control Panel. Now, with one particular app (Sibelius) I keep getting the start-up message "CTASIO Warning: Creative ASIO: there are no Creative audio products installed on the system that support ASIO". I offer this as a candidate for "most useless message", but that's beside the point. I used a commercial registry cleaner (PC Tools Registry Mechanic) and then edited the registry looking for "creative", "audigy" and "ASIO". After removing everything I could find, I still get the message. Any suggestions?

    Read the article

  • Ubuntu 10.10: getting appropriate monitor resolution for LCD HDTV

    - by lurscher
    I'm running the Ubuntu 10.10 x86_64 version with an Nvidia 9800 GT, and installed the 270.41.06 Nvidia drivers following this guide. I have an LG 42LH30FR LCD TV connected via the DVI link (RGB PC input). I'm able to get a 1024x768 resolution without overscan (I can get 1080i = 1366x768, but there is a lot of hidden screen space to the right and I don't know what to do about it). I want to get full HD. I can get full HD (1080p = 1920x1080) on Windows XP 64-bit with a custom resolution created in the Nvidia Control Panel. From reading over xorg.conf configurations, it seems I need to add a certain modeline to the monitor configuration, but I don't know where to get the appropriate options for this task. Any suggestions?

    Read the article

  • Effective handling of variables in non-object oriented programming

    - by srnka
    What is the best method to use and share variables between functions in non-object-oriented programming languages? Let's say that I use 10 parameters from a DB: an ID and 9 other values linked to it. I need to work with all 10 parameters in many functions. I can do it in the following ways:

    1. Call functions using only the ID, and in every function fetch the other parameters from the DB.
       Advantage: local variables are clearly visible, and there is only one input parameter per function.
       Disadvantage: it is slow, and the same parameter-fetching lines appear in every function, which makes functions longer and less clear.

    2. Call functions with all 10 parameters.
       Advantage: working with local variables, clear function code.
       Disadvantage: many input parameters, which is not nice.

    3. Fetch the parameters as global variables once and use them everywhere.
       Advantage: clearer code, shorter functions, faster processing.
       Disadvantage: global variables, meaning you lose control of them, with the possibility of unwanted overwriting (especially when some functions should change their values).

    Maybe there is some other way to implement this and make the program cleaner and more effective. Can you say which way is best for solving this issue?
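    A common middle ground between options 2 and 3 is to group the values into one record and pass that single record around, which keeps functions at one parameter without resorting to globals. A minimal Python sketch; the record fields and the DB call are made-up illustrations:

        from dataclasses import dataclass

        @dataclass
        class CustomerRecord:
            """Groups the ID and its linked values into one unit."""
            id: int
            name: str
            balance: float
            # ...the remaining linked values would go here...

        def load_record(db, record_id):
            # Fetch once, near the program's entry point (hypothetical DB API).
            row = db.fetch_row("customers", record_id)
            return CustomerRecord(id=row["id"], name=row["name"],
                                  balance=row["balance"])

        def apply_discount(rec, pct):
            # Every function takes the single record instead of 10 parameters.
            rec.balance -= rec.balance * pct / 100.0

        def describe(rec):
            return f"{rec.id}: {rec.name} ({rec.balance:.2f})"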

    Read the article

  • fail2ban on server with LXC Containers

    - by RoboTamer
    The issue is that modprobe and iptables don't work inside an LXC container. LXC is the userspace control package for Linux Containers, a lightweight virtual system mechanism sometimes described as "chroot on steroids". The iptables error inside the container is:

        # iptables -I INPUT -s 122.129.126.194 -j DROP
        iptables v1.4.8: can't initialize iptables table `filter': Table does not exist (do you need to insmod?)
        Perhaps iptables or your kernel needs to be upgraded.

    I am guessing that it can't work because the LXC containers share one kernel: the main server's kernel. How do I run fail2ban in this case? modprobe and iptables work on the main server, so I could install it there and link to the log files somehow, I guess? Any suggestions?

    Read the article

  • Installing maven on Ubuntu by manual download

    - by WebDevHobo
    To install Maven, I downloaded the latest version from the website and then followed these steps: http://maven.apache.org/download.html#Installation

    The last step, the version check, does not work. It says that 'mvn' is currently not installed and that I should type:

        sudo apt-get install maven2

    If I run the mvn file directly by its full path, it does work:

        root@ubuntu:~# /usr/local/apache-maven/apache-maven-2.2.1/bin/mvn --version
        Apache Maven 2.2.1 (r801777; 2009-08-06 12:16:01-0700)
        Java version: 1.6.0_21
        Java home: /usr/java/jdk1.6.0_21/jre
        Default locale: en_US, platform encoding: UTF-8
        OS name: "linux" version: "2.6.32-25-generic" arch: "i386" Family: "unix"

    So, what am I doing wrong here? Or what would an apt-get install do extra that I might have forgotten?

    Read the article

  • Samba as a PDC and offline authentication

    - by Aimé Barteaux
    Say I have a Windows laptop which has been joined to a domain, and the domain has a Samba server as its PDC. Now say that I move the laptop outside of the network (the network is completely inaccessible). Will I be able to log on to accounts I have accessed before on the laptop (through GINA)?

    Update: Looking at the smb.conf documentation, I noticed the setting winbind offline logon:

        This parameter is designed to control whether Winbind should allow login
        with the pam_winbind module using cached credentials. If enabled, winbindd
        will store user credentials from successful logins encrypted in a local cache.

    To me it looks like this solves the issue, but can anyone else confirm it and/or point out if any additional values need to be set?

    Read the article

  • Web based console connections not working in Windows 7

    - by nmeth
    For slightly complicated reasons we tend to give people console access to VMs via the web UI. This has worked fine in the past; however, when users update their client machines to Windows 7 (or Vista, I am told, although I have not tested that), the console fails to work. In IE8, having allowed the ActiveX control, the tab causes an "Internet Explorer has stopped working" dialog. In Firefox 3.5, once the plugin has been installed, using the console causes the browser to crash. I've updated to the most recent VC 2.5 release and ESX 3.5u5. Is anyone else seeing this? Any clues how to get round it (other than using the fat client)? Nigel.

    Read the article

  • Team Software Development using Ruby on Rails

    - by Panoy
    I used to work alone on small to medium sized programming projects and have no experience working in a team environment. Currently, there will be 3 of us in an in-house software development team tasked with developing a number of applications for an academic institution. We have decided to use the web for the majority of the projects and are planning to choose Ruby on Rails for this, and I would like to ask for your input, advice and approaches with regard to software development as a team using the RoR web framework. One thing that has really confounded me is how you divide the programming tasks of a project if there are 3 of you doing the coding. It's obvious that we as developers approach a problem in a modular way and finish modules one after another. If the project consists of 3 modules, should each one of us focus on one of those modules? Would it be faster that way? How about if the 3 of us focus on one module first (that's what I really prefer)? Is using a distributed version control system such as Git the answer to this type of problem? Please don't forget to share your tips and experiences with regard to team software development. Cheers!

    Read the article

  • How to disable all window borders from the VLC playback window

    - by Rob Thomas
    Is there a way to make the VLC playback window completely borderless (no title bar, no other borders)? Ideally, I would like the playback window to be completely borderless, with a separate window that has the controls (play, pause, timeline control, etc.).

    Updates: I cannot use full-screen mode; I would like the playback window to be sized the same as the video, which is usually about 300x300 px. Also, I need to be able to position the window anywhere on the desktop. I'm using the Windows version of VLC.

    Read the article

  • Which home automation technology to choose? X10, Z-Wave, etc. in UK [closed]

    - by Stewart Robinson
    I work away most of the week and have to leave my house empty. I would like to have home automation monitor and control my house. Naturally I want to be a bit of a geek and do some of it myself, so I've researched technologies like X10, Z-Wave, C-Bus, etc., but I want opinions on which I should use in my house. I have a Linux box that could be used to do the actual controlling if needed. So which technology should I pick, and why?

    Read the article
