Search Results

Search found 28957 results on 1159 pages for 'single instance'.

  • How do I force specific permissions for new files/folders on Linux file server?

    - by humble_coder
    I'm having an issue with my install of Ubuntu 9.10 (file server) and its Samba permissions. Logging in and reading work fine. However, directories newly created by one user restrict access for the other users. For instance, if Bob (a Windows user who maps the drive) creates a folder in the directory, Jane (a Mac user who simply smb-mounts it) can read from it but can't write to it -- and vice versa. I then must go chmod 777 the directory for everyone to be happy. I've tried editing the "create/directory mask" and "force" options in the smb.conf file, but this doesn't seem to help. I'm about to resort to crontabbing a recursive chmod routine, although I'm sure this isn't the right fix. How do I get all new items to always be 777? Does anyone have any suggestions to fix this recurring situation? Best
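
    For reference, a minimal sketch of the mask/force section in smb.conf (the share name and path are placeholders): the masks only filter which permission bits are allowed, while the force options turn bits on unconditionally, so both are typically set for guaranteed 0777:

        [shared]
            path = /srv/share
            read only = no
            ; masks limit which permission bits clients may set...
            create mask = 0777
            directory mask = 0777
            ; ...while the force options OR these bits in on every create
            force create mode = 0777
            force directory mode = 0777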

  • Delphi Client-Server Application using Firebird 2.5 error

    - by Japie Bosman
    I have got a lengthy question to ask. First of all, I'm still very new when it comes to Delphi programming, and my experience has been mostly developing small single-user database applications using ADO and an Access database. I need to make the transition now to a client-server application, and this is where the problem starts. I decided to use Firebird 2.5 embedded as my database, as it is open source, can be used with the InterBase components in Delphi, and lets multiple clients access the database simultaneously. So I followed the InterBase tutorial in Delphi. I managed to connect the client to the server and see the data in the example (while both were running on my PC), but when I tried to move the client to another PC, keeping the server on mine, and ran it to see if I could connect to the server, it gave me the following error: Exception EIdSocketError in module clientDemo.exe at 0029DCAC. Socket Error # 10061 Connection refused. I understand that this might be because the host is defined as localhost in the client. But here is my first question. In the TSQLConnection you can set the hostname under Driver > HostName. The thing I want to know is how you do this at run time, as I cannot get at the property when I try to make an edit box to allow the user to enter the value and then set it via code, for example: SQLConnection1.Driver.Hostname := edtHost.Text; There is no such property to set, so how do you set the hostname at run time? I'm using Delphi XE2. There are still a lot of questions to come, especially when it comes to deployment, but I will take this piece by piece, and I appreciate the advice.
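
    A sketch of the usual workaround (an assumption based on the DataSnap-style connection used in that tutorial; edtHost is the hypothetical edit box): driver properties shown in the Object Inspector are surfaced at run time through the connection's Params string list rather than as a Driver object:

        // Reopen the connection with a host entered by the user;
        // 'HostName' is read from TSQLConnection.Params at connect time.
        SQLConnection1.Close;
        SQLConnection1.Params.Values['HostName'] := edtHost.Text;
        SQLConnection1.Open;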

  • Vector with Constant-Time Remove - still a Vector?

    - by Darrel Hoffman
    One of the drawbacks of most common implementations of the Vector class (or ArrayList, etc. -- basically any array-backed expandable list class) is that their remove() operation generally runs in linear time: if you remove an element, you must shift all elements after it one space back to keep the data contiguous. But what if you're using a Vector just as a list-of-things, where the order of the things is irrelevant? In this case removal can be accomplished in a few simple steps:

    1. Swap the element to be removed with the last element in the array.
    2. Reduce the size field by 1. (No need to re-allocate the array, as the deleted item is now beyond the size field and thus not "in" the list any more. The next add() will just overwrite it.)
    3. (Optional) Delete the last element to free up its memory. (Not needed in garbage-collected languages.)

    This is clearly now a constant-time operation, since it performs only a single swap regardless of size. The downside, of course, is that it changes the order of the data, but if you don't care about the order, that's not a problem. Could this still be called a Vector? Or is it something different? It has some things in common with "Set" classes in that the order is irrelevant, but unlike a Set it can store duplicate values. (Also, most Set classes I know of are backed by a tree or hash map rather than an array.) It also bears some similarity to Heap classes, although without the log(N) percolate steps, since we don't care about the order.
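
    The whole trick fits in a few lines; a C++ sketch over std::vector (the function name is illustrative):

        #include <cstddef>
        #include <utility>
        #include <vector>

        // O(1) unordered remove: overwrite slot i with the last element,
        // then shrink by one. The relative order of survivors changes.
        template <typename T>
        void swap_remove(std::vector<T>& v, std::size_t i) {
            std::swap(v[i], v.back()); // step 1: swap with the tail
            v.pop_back();              // steps 2-3: shrink, destroying the tail
        }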

  • Setting XSL-FO XML Schema in Visual Studio

    - by Lukasz Kurylo
    I've been playing lately with XSL-FO for generating PDF documents. XSL-FO has a long list of available tags and attributes, and for a new guy who wants to create a simple document it is a nightmare to find the proper one. Fortunately, we can set a schema for XSL-FO, which gives us full IntelliSense in VS. For a simple *.fo file, we can set the path to the schema directly in the file:

        <?xml version="1.0" encoding="utf-8"?>
        <fo:root
            xmlns:fo="http://www.w3.org/1999/XSL/Format"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://www.w3.org/1999/XSL/Format http://www.xmlblueprint.com/documents/fop.xsd">
        ...

    We can of course use the built-in VS XML Schemas selector instead. To use it, we must copy the schema file to the Schemas catalog (the default path for VS2012 is C:\Program Files (x86)\Microsoft Visual Studio 11.0\Xml\Schemas). Then we can go to Properties of the opened xml/xslt file and point the newly added schema to the file. From then on, we should have IntelliSense enabled.

  • Gathering statistics for an Oracle WebCenter Content Database

    - by Nicolas Montoya
    Have you ever heard: "My Oracle WebCenter Content instance is running slow. I checked the memory and CPU usage of the application server and it has plenty of resources. What could be going wrong?"

    An Oracle WebCenter Content instance runs on an application server and relies on a database server on the back end. If your application server tier is running fine, chances are that your database server tier may host the root of the problem. While many things could cause performance problems, on active Enterprise Content Management systems keeping database statistics updated is extremely important.

    The Oracle Database has a set of built-in optimizer utilities that can help make database queries more efficient. It is strongly recommended to update or re-create the statistics about the physical characteristics of a table and the associated indexes in order to maximize the efficiency of the optimizer. These physical characteristics include:

    - Number of records
    - Number of pages
    - Average record length

    The frequency with which you need to update statistics depends on how quickly the data is changing. Typically, statistics should be updated when the number of new items since the last update is greater than ten percent of the number of items when the statistics were last updated. If a large number of documents are being added to or removed from the system, a post step should be added to gather statistics upon completion of this massive data change. In some cases, you may need to collect statistics in the middle of the data processing to expedite its execution. These processes include but are not limited to: data migration, bootstrapping of a new system, records management disposition processing (typically at the end of the calendar year), etc. A DOCUMENTS table with ten million rows will often generate a very different plan than a table with just a thousand.

    A quick check of the statistics for the WebCenter Content (WCC) database could be performed via the query below:

        SELECT OWNER, TABLE_NAME, NUM_ROWS, BLOCKS, AVG_ROW_LEN,
               TO_CHAR(LAST_ANALYZED, 'MM/DD/YYYY HH24:MI:SS')
        FROM DBA_TABLES
        WHERE TABLE_NAME='DOCUMENTS';

        OWNER       TABLE_NAME   NUM_ROWS   BLOCKS   AVG_ROW_LEN   LAST_ANALYZED
        ATEAM_OCS   DOCUMENTS        4172       46            61   04/06/2012 11:17:51

    This output returns not only the date when the WCC table DOCUMENTS was last analyzed, but also the <DATABASE SCHEMA OWNER> for this table, in the form <PREFIX>_OCS. This database username can later be used to check on other objects owned by the WCC <DATABASE SCHEMA OWNER>, as shown below:

        SELECT OWNER, TABLE_NAME, NUM_ROWS, BLOCKS, AVG_ROW_LEN,
               TO_CHAR(LAST_ANALYZED, 'MM/DD/YYYY HH24:MI:SS')
        FROM DBA_TABLES
        WHERE OWNER='ATEAM_OCS'
        ORDER BY NUM_ROWS ASC;

        OWNER       TABLE_NAME             NUM_ROWS   BLOCKS   AVG_ROW_LEN   LAST_ANALYZED
        ...
        ATEAM_OCS   REVISIONS                  2051       46           141   04/09/2012 22:00:22
        ATEAM_OCS   DOCUMENTS                  4172       46            61   04/06/2012 11:17:51
        ATEAM_OCS   ARCHIVEHISTORY             4908      244           218   04/06/2012 11:17:49
        ATEAM_OCS   DOCUMENTHISTORY            5865      110            72   04/06/2012 11:17:50
        ATEAM_OCS   SCHEDULEDJOBSHISTORY      10131      244           131   04/06/2012 11:17:54
        ATEAM_OCS   SCTACCESSLOG              10204      496           268   04/06/2012 11:17:54
        ...

    The Oracle Database allows you to collect statistics of many different kinds as an aid to improving performance. The DBMS_STATS package is concerned with optimizer statistics only. The database sets automatic statistics collection of this kind on by default; the DBMS_STATS package is intended only for specialized cases. The following subprograms gather certain classes of optimizer statistics:

    - GATHER_DATABASE_STATS procedures
    - GATHER_DICTIONARY_STATS procedure
    - GATHER_FIXED_OBJECTS_STATS procedure
    - GATHER_INDEX_STATS procedure
    - GATHER_SCHEMA_STATS procedures
    - GATHER_SYSTEM_STATS procedure
    - GATHER_TABLE_STATS procedure

    The DBMS_STATS.GATHER_SCHEMA_STATS PL/SQL procedure gathers statistics for all objects in a schema:

        DBMS_STATS.GATHER_SCHEMA_STATS (
            ownname          VARCHAR2,
            estimate_percent NUMBER   DEFAULT to_estimate_percent_type
                                              (get_param('ESTIMATE_PERCENT')),
            block_sample     BOOLEAN  DEFAULT FALSE,
            method_opt       VARCHAR2 DEFAULT get_param('METHOD_OPT'),
            degree           NUMBER   DEFAULT to_degree_type(get_param('DEGREE')),
            granularity      VARCHAR2 DEFAULT GET_PARAM('GRANULARITY'),
            cascade          BOOLEAN  DEFAULT to_cascade_type(get_param('CASCADE')),
            stattab          VARCHAR2 DEFAULT NULL,
            statid           VARCHAR2 DEFAULT NULL,
            options          VARCHAR2 DEFAULT 'GATHER',
            objlist          OUT      ObjectTab,
            statown          VARCHAR2 DEFAULT NULL,
            no_invalidate    BOOLEAN  DEFAULT to_no_invalidate_type (
                                              get_param('NO_INVALIDATE')),
            force            BOOLEAN  DEFAULT FALSE);

    There are several values for the OPTIONS parameter that we need to know about:

    - GATHER reanalyzes the whole schema
    - GATHER EMPTY only analyzes tables that have no existing statistics
    - GATHER STALE only reanalyzes tables with more than 10 percent modifications (inserts, updates, deletes)
    - GATHER AUTO reanalyzes objects that currently have no statistics and objects with stale statistics. Using GATHER AUTO is like combining GATHER STALE and GATHER EMPTY.

    Example:

        exec dbms_stats.gather_schema_stats( -
            ownname => '<PREFIX>_OCS', -
            options => 'GATHER AUTO' -
        );

  • How/where to find all Update (msu) packages for Windows 7 (Enterprise)

    - by Rudi
    Hi, I've been using Wim2Vhd to create native-boot VHD files. (I'm using this to keep several development environments ready. I'm a developer, I know -- I need help ;-) Now the first boot always ends up spending several minutes installing all of the Windows updates... I know I could avoid this if I had a massive list of QFE (.msu) packages that I could supply to the command line of Wim2Vhd. Where can I find all of the updates so I can download them? I have found this: http://www.megaleecher.net/Downloading_Microsoft_Windows_7_Offline_Updates but I'd like a more official source and an easier way to download them all. I'm a developer, so if I find the information (web service?) to get all the current packages, I'll build a tool to download them to a single location and generate a list of them for use with Wim2Vhd.

  • can I force server to always use turboboost?

    - by javapowered
    I'm using an HP DL360p Gen8 with 2 * Xeon E5-2640. I do not load the CPU to 100%; I load it only ~10%, and so I guess Turbo Boost is not activated. However, I'm using my server for trading, so I absolutely don't care about CPU loading, but I always want to process my data ASAP. So I want the server to operate at the maximum 3 GHz. I.e., 90% of the CPU time I don't have anything to process; 10% of the CPU time I have data to process, but I need to process it ASAP. I need every single microsecond. So I want the server to operate always in maximum "turboboosted" mode. Is it possible?

  • SharePoint Unit Testing and Load Testing Finally?

    - by Kit Ong
    It has always been a real pain to incorporate extensive SharePoint unit testing and load testing in a project; could Visual Studio 2012 finally make this easier? It certainly looks like it. Here's a brief overview of SharePoint support in Visual Studio 2012.

    Load testing -- We now support load testing for SharePoint out of the box. This is more involved than you might imagine due to how dynamic SharePoint is. You can't just record a script and play it back -- it won't work, because SharePoint generates and expects dynamic data (like GUIDs). We've built the extensions to our load testing solution to parse the dynamic SharePoint data and include it appropriately in subsequent requests. So now you can record a script and play it back, and we will dynamically adjust it to match what SharePoint expects.

    Unit testing -- One of the big problems with unit testing SharePoint is that most code requires SharePoint to be running, and trying to run tests against a live SharePoint instance is a pain. So we've built a SharePoint "emulator" using our new VS 2012 Fakes & Stubs capability. This will make unit testing of SharePoint components WAY easier.

    Read more in the link below:
    http://blogs.msdn.com/b/bharry/archive/2012/09/12/visual-studio-update-this-fall.aspx

  • HD latency measurement using bonnie++ on different machines with different RAM size

    - by j0nes
    Hello, I have run bonnie++ v1.96 on two different servers without any additional load. One server is a "physical" Dell server with 32GB RAM; the other one is a virtual instance with 14GB RAM. I have read in the bonnie manuals that I should use twice the size of RAM in my bonnie runs, so I used 64GB on the physical machine and 28GB on the virtual machine. Now I want to compare the results, and I am wondering whether the results are comparable at all. The most interesting part is the latency part -- on the physical machine, the values are about 10 times higher than on the virtual machine! Can I take these results seriously (e.g. the virtual machine HD is much, much faster), or does the different RAM size tamper with the results? Thanks! Jonas
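
    For reference, runs sized at twice RAM would look something like this (the target directory and user are placeholders; -s and -r take sizes in MB):

        # physical box: 32 GB RAM, so test with 64 GB of file I/O
        bonnie++ -d /mnt/bench -s 65536 -r 32768 -u nobody
        # virtual instance: 14 GB RAM, so test with 28 GB
        bonnie++ -d /mnt/bench -s 28672 -r 14336 -u nobody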

  • Mapping dynamic buffers in Direct3D11 in Windows Store apps

    - by Donnie
    I'm trying to make instanced geometry in Direct3D11, and the ID3D11DeviceContext1->Map() call is failing with the very helpful error of "Invalid Parameter" when I'm attempting to update the instance buffer. The buffer is declared as a member variable:

        Microsoft::WRL::ComPtr<ID3D11Buffer> m_instanceBuffer;

    Then I create it (which succeeds):

        D3D11_BUFFER_DESC instanceDesc;
        ZeroMemory(&instanceDesc, sizeof(D3D11_BUFFER_DESC));
        instanceDesc.Usage = D3D11_USAGE_DYNAMIC;
        instanceDesc.ByteWidth = sizeof(InstanceData) * MAX_INSTANCE_COUNT;
        instanceDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
        instanceDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
        instanceDesc.MiscFlags = 0;
        instanceDesc.StructureByteStride = 0;
        DX::ThrowIfFailed(d3dDevice->CreateBuffer(&instanceDesc, NULL, &m_instanceBuffer));

    However, when I try to map it:

        D3D11_MAPPED_SUBRESOURCE inst;
        DX::ThrowIfFailed(d3dContext->Map(m_instanceBuffer.Get(), 0, D3D11_MAP_WRITE, 0, &inst));

    The map call fails with E_INVALIDARG. Nothing is NULL incorrectly, and this being one of my first D3D apps I'm currently stumped on what to do next to track it down. I'm thinking I must be creating the buffer incorrectly, but I can't see how. Any input would be appreciated.
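
    For what it's worth, D3D11 only accepts D3D11_MAP_WRITE_DISCARD or D3D11_MAP_WRITE_NO_OVERWRITE for D3D11_USAGE_DYNAMIC resources; plain D3D11_MAP_WRITE is rejected with E_INVALIDARG. A sketch of the corrected update (instances and instanceCount are hypothetical names):

        // Discard the old contents and get a fresh CPU-writable region.
        D3D11_MAPPED_SUBRESOURCE inst;
        DX::ThrowIfFailed(d3dContext->Map(m_instanceBuffer.Get(), 0,
            D3D11_MAP_WRITE_DISCARD, 0, &inst));
        memcpy(inst.pData, instances, sizeof(InstanceData) * instanceCount);
        d3dContext->Unmap(m_instanceBuffer.Get(), 0);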

  • Is there a special way to set up Garry's Mod for Mac? [closed]

    - by credford
    I just downloaded Garry's Mod from Steam and every time I try to play Single Player on the first map, it crashes on the Loading Resources section of the progress bar. Here is some of the exception information the system prints out:

        Exception Type:  EXC_BAD_ACCESS (SIGBUS)
        Exception Codes: KERN_PROTECTION_FAILURE at 0x000000001acec4d0
        Crashed Thread:  4

    When I ran it on the Windows side (Bootcamp), Windows 7 required me to give Garry's Mod some permissions. I wasn't asked to do this on the Mac side. Maybe that's it? Is there some kind of special setup for the Mac?

  • What is the difference between apt-get and dpkg?

    - by William F. Hammond
    Both apt-get and dpkg can be used to install and remove packages. When should I use which? Context: I'm stuck in package limbo between 10.04.4 LTS and 12.04.1 LTS after an attempted upgrade via the package manager. For example, to fix things I wanted to remove "skype" so that the things it depends on could be freed up. But "aptitude" (my normal package management tool) refused to remove it. The advice at http://administratosphere.wordpress.com/2011/11/02/rescuing-an-interrupted-ubuntu-upgrade/ seems helpful but not adequate to resolve my package conflicts. Also, there's a strange thing where the grub menu seems not to be properly interpreted, but eventually I get the splash screen with "/ is not ready yet or not present. Continue to wait; or press S to skip or M to recover manually." Manual recovery puts me in a single-user shell where I can easily remount / as rw and bring up the network. If I become myself, the command line seems quite robust, but there seems to be no way to get X11 going.
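
    The short version of the difference, as a sketch (package names are just examples): dpkg operates on individual .deb files and does no dependency resolution; apt-get sits on top of dpkg, fetches from repositories, and resolves dependencies. The usual rescue sequence after an interrupted upgrade combines both:

        # dpkg: low level, local .deb files, no dependency fetching
        dpkg -i skype.deb        # install one package file
        dpkg -r skype            # remove it (refuses if others depend on it)

        # apt-get: repository-aware front end that drives dpkg
        apt-get install skype
        apt-get remove skype

        # typical recovery after a broken upgrade
        dpkg --configure -a      # finish any half-configured packages
        apt-get -f install       # ask apt to repair broken dependencies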

  • Mono is frequently used to say "Yes, .NET is cross-platform". How valid is that claim?

    - by Thorbjørn Ravn Andersen
    In What would you choose for your project between .NET and Java at this point in time? I say that I would consider "Will you always deploy to Windows?" the single most important (EDIT: technical) decision to make up front in a new web project, and if the answer is "no", I would recommend Java instead of .NET. A very common counter-argument is "If we ever want to run on Linux/OS X/whatever, we'll just run Mono", which is a very compelling argument on the surface, but I don't agree, for several reasons. OpenJDK and all the vendor-supplied JVMs have passed the official Sun TCK, ensuring things work correctly. I am not aware of Mono passing a Microsoft TCK. Mono trails the .NET releases. What .NET level is currently fully supported? Do all GUI elements (WinForms?) work correctly in Mono? Businesses may not want to depend on open-source frameworks as the official plan B. I am aware that with the new governance of Java by Oracle, the future is unsafe, but e.g. IBM provides JDKs for many platforms, including Linux. They are just not open-sourced. So, under which circumstances is Mono a valid business strategy for .NET applications? Edit: Mark H summarized it as: "If the claim is that "I have a windows application written in .NET, it should run on mono", then no, it's not a valid claim - but Mono has made efforts to make porting such applications simpler."

  • debian - running unattended-upgrades on a particular day of the week

    - by dastra
    We're running unattended-upgrades on Debian squeeze, and would like it to run once a week, only on a Wednesday morning. To attempt this, we have set:

        APT::Periodic::Unattended-Upgrade "7";

    in /etc/apt/apt.conf.d/50unattended-upgrades, and then touched /var/lib/apt/periodic/update-stamp to set the timestamp to a Wednesday, for instance:

        touch -t 201211280000 /var/lib/apt/periodic/update-stamp

    Running:

        stamp=$(date --date=$(date -r /var/lib/apt/periodic/update-stamp --iso-8601) +%s 2>/dev/null)
        date -u --date="1970-01-01 $stamp sec GMT"

    gives the correct timestamp:

        Wed Nov 28 00:00:00 UTC 2012

    However, unattended-upgrades then seems to ignore this and run the updates on a Saturday morning. Could anyone enlighten me as to how this parameter works, and how to set up upgrades to run on a Wednesday?
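
    Note that APT::Periodic only expresses "at most once every N days", measured against the stamp file, so it cannot pin a weekday and the run day drifts. One workaround (an assumption, not something from the original post) is to set APT::Periodic::Unattended-Upgrade "0" and drive the upgrade from cron instead:

        # /etc/cron.d/weekly-unattended-upgrade (hypothetical file name)
        # minute hour day-of-month month day-of-week (3 = Wednesday)
        0 4 * * 3 root /usr/bin/unattended-upgrade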

  • Who spotted the omission?

    - by olaf.heimburger
    In my entry OFM 11g: Install OAM 10.1.4.3 (32-bit) on 64-bit RedHat AS 5 I explained how to install OAM 10.1.4.3 (32-bit) on 64-bit RedHat. This is great and works. If you seriously want to use OAM 10.1.4.3, you should consider OHS 11g 32-bit, but this installation is a bit tricky. Nearly all the tricks to get this done are described in the above-mentioned entry. Today I realized that I had missed a small bit needed to get the installation done successfully. The missing part is within the script that creates a vital piece of the OHS 11g package. This script is called genclientsh and resides in $OHS_HOME/bin. It uses gcc to link binaries. By default this script works great, but on 64-bit Linux it fails. To get around this, find the variable LD and change its value from gcc to gcc -m32. Done.

    Caveat: On support.oracle.com you will find a Note that suggests building a small shell script named gcc that includes the -m32 switch. Actually, I consider this dangerous, because we are humans and tend to forget things quickly. Building a globally available script that changes things for a single setup has side effects that will result in unpredictable results.
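
    The edit would look something like this (a sketch; the surrounding lines in genclientsh may differ):

        # $OHS_HOME/bin/genclientsh -- before:
        LD=gcc
        # after: force 32-bit output when linking on a 64-bit host
        LD="gcc -m32"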

  • Restrictive routing best practices for Google App Engine with python?

    - by Aleksandr Makov
    Say I have a simple structure:

        app = webapp2.WSGIApplication([
            (r'/', 'pages.login'),
            (r'/profile', 'pages.profile'),
            (r'/dashboard', 'pages.dash'),
        ], debug=True)

    Basically all pages require authentication except for the login. If a visitor tries to reach a restricted page and he isn't authorized (or lacks privileges), then he gets redirected to the login view. The question is about the routing design. Should I check the auth and ACL privileges in each of the modules (pages.profile and pages.dash from the example above), or just pass all requests through a single routing mechanism:

        app = webapp2.WSGIApplication([
            (r'/', 'pages.login'),
            (r'/.+', 'router')
        ], debug=True)

    I'm still quite new to GAE, but my app requires authentication as well as ACL. I'm aware that there's a login directive on the server config level, but I don't know how it works, how I can tie it into my ACL logic, or, what's worse, how to estimate the time needed to get it running. Besides, it appears to provide only 2 user groups: admin and user. In any case, this is the configuration I use:

        handlers:
        - url: /favicon.ico
          static_files: static/favicon.ico
          upload: static/favicon.ico
        - url: /static/*
          static_dir: static
        - url: .*
          script: main.app
          secure: always

    Or am I missing something here, and ACL can be set in the config file? Thanks.
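
    A common middle ground (a sketch, not a GAE-prescribed pattern; session_user() is a hypothetical helper for however logins are tracked) is to keep one handler class per page but centralize the check in a base class, so the routing table stays declarative while auth lives in one place:

        import webapp2

        class AuthedHandler(webapp2.RequestHandler):
            """Every protected page handler inherits from this."""
            def dispatch(self):
                user = self.session_user()      # hypothetical session lookup
                if user is None:
                    return self.redirect('/')   # anonymous -> login view
                self.user = user                # expose to page handlers
                super(AuthedHandler, self).dispatch()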

  • switch's mgmt-ip is not remotely reachable.

    - by RainDoctor
    Switch model: Netgear FSM7352PS
    mgmt-ip: 192.168.1.100/24
    Vlan id: 1 (default)

    There are a couple of hosts in this Vlan: 192.168.1.2 (esxi console), for instance. 192.168.1.1 is the firewall/router interface.

    - I can ping 192.168.1.1 and 192.168.1.2 from other vlans, say, 172.31.0.0/24.
    - I can ssh to 192.168.1.2 from 172.31.0.0/24.
    - I can't ping 192.168.1.100 from 172.31.0.0/24.
    - However, I can ping 192.168.1.100 from 192.168.1.2 or from my laptop connected to that vlan (192.168.1.11). I can connect to the web GUI from my laptop when I am in that Vlan.

    Can anyone shed some light on why I am not able to connect from other vlans?

  • Connect to bluetooth device from command line

    - by Ilari Kajaste
    Background: I'm using my Bluetooth headset as audio output. I managed to get it working by following the long list of instructions in the BluetoothHeadset community documentation, and I have automated the process of activating the headset as the default audio output in a script, thanks to another question. However, since I use the Bluetooth headset with both my phone and computer (and the headset doesn't support two input connections), in order for the phone not to "steal" the connection when the headset is turned on, I force the headset into discovery mode when connecting to the computer (the phone gets to connect to it automatically). So even though the headset is paired OK and would in a "normal" scenario autoconnect, I always have to use the little Bluetooth icon in the notification area to actually connect to my device (see screenshot).

    What I want to avoid: the GUI for connecting to a known and paired Bluetooth device.

    What I want instead: I'd want to make Bluetooth do exactly what clicking the connect item in the GUI does, only from the command line. I want to use the command line so I can make a single-keypress shortcut for the action, and wouldn't need to navigate the GUI every time I want to establish a connection to the device.

    The question: How can I attempt to connect to a specific, known and paired Bluetooth device from the command line?

    Further question: How do I tell if the connection was successful or not?
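
    With newer BlueZ this can be a one-liner (a sketch; the MAC address is a placeholder, and the non-interactive form assumes a reasonably recent bluetoothctl):

        # connect to an already-paired device by its address;
        # recent bluetoothctl exits non-zero when the connection fails,
        # which answers the "did it work?" question in scripts
        bluetoothctl connect 00:11:22:33:44:55 && echo connected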

  • Are there any HTPC-optimized web browsers?

    - by smackfu
    Features that I would ideally include in "HTPC-optimized":

    - Full-screen.
    - Navigable using a remote or the keyboard arrows.
    - Legible at couch distances.

    Or, to put it another way, imagine the design requirements for Hulu Desktop or XBMC or WMC, applied to a web browser. Opera on a Wii meets most of these criteria but not being HD wastes a lot of potential. If a single solution doesn't exist, is there some combination of Firefox add-ons that will get me there?

  • PostgreSQL 9: Does Vacuuming a table on the primary replicate on the mirror?

    - by Scott Herbert
    Running PostgreSQL 9.0.1, with streaming replication keeping one read-only mirror instance up to date. Auto-vacuum is on on the primary, except for a few tables which are not vacuumed by the auto-vacuum daemon, in an effort to reduce business-hour IO. These tables are "materialised views". Each night at midnight, we run a vacuum across the database in order to clean up the tables that are excluded from auto-vacuum. I'm wondering if that process replicates across to the mirror, or if I need to set up vacuuming on the mirror as well?
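
    For reference, the nightly run might look something like this in cron (a sketch, assuming the standard vacuumdb client running as the postgres user):

        # crontab for user postgres: vacuum and analyze every database at midnight
        0 0 * * * vacuumdb --all --analyze --quiet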

  • faster ( squid + apache httpd + apache tomcat )

    - by letronje
    We have a production setup where we have:

    - Squid in the front (caching images, js, css, etc)
    - Apache httpd in the middle (prefork + mod_rewrite + mod_jk/AJP + mod_deflate + mod_php (a few php pages))
    - Apache Tomcat 5.5 at the end, serving all the dynamic stuff

    What would be the best way to reduce the overhead of having 3 servers in the request path? I'm wondering if replacing httpd with a faster web server like nginx/lighttpd will help. httpd right now does the job of url rewriting (for clean urls), talking to Tomcat (via mod_jk), compressing output (mod_deflate), and serving some low-traffic php pages. What would be an ideal replacement for httpd given that we need these features? Is there a way to replace (squid + apache) with a single entity that does caching well (like Squid) for static stuff, rewrites urls, compresses responses, and forwards dynamic stuff directly to Tomcat? I've heard about Varnish Cache and I'm wondering if it can help.
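
    nginx can in principle collapse the Squid and httpd tiers into one process, though it speaks plain HTTP to Tomcat rather than AJP, and the PHP pages would need FastCGI. A sketch (the cache path, rewrite rule, and Tomcat port are assumptions):

        # http-context fragment: static caching + gzip + rewrite + proxy
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static:64m;
        server {
            listen 80;
            gzip on;                                # replaces mod_deflate
            location ~ \.(png|gif|jpg|js|css)$ {
                proxy_cache static;                 # replaces Squid for statics
                proxy_cache_valid 200 1h;
                proxy_pass http://127.0.0.1:8080;
            }
            location / {
                rewrite ^/old/(.*)$ /app/$1 last;   # example clean-url rewrite
                proxy_pass http://127.0.0.1:8080;   # Tomcat HTTP connector, not AJP
            }
        }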

  • nagios wrongly reports packet loss

    - by Alien Life Form
    Lately, my nagios 3.2.3 install (CentOS5, monitoring ~300 hosts, 1150 services) has started to occasionally report high packet loss on 50-60 hosts at a time. The problem is that it's bogus: manual runs of ping (or of nagios' own check_ping binary) find no fault with any of the affected hosts. The only possible cures I have found so far are: run all the checks manually (they will succeed, but it may act up again on the next check), or acknowledge and wait for the problem to go away (which may take several hours). I suspect (for no particular reason other than single rescheduled checks succeeding) that the problem may lie with all the checks being mass-scheduled together -- in which case introducing some jitter into the scheduling (how?) might help. Or it may be something completely different. Ideas, anyone?
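
    Two nagios.cfg knobs control exactly that kind of scheduling jitter (a sketch; the values need tuning to the installation):

        # interleave checks from different hosts instead of running each
        # host's services back-to-back ("s" = smart interleaving)
        service_interleave_factor=s
        # smear the (re)scheduling of service checks across a window, in minutes
        max_service_check_spread=30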

  • New SQLOS features in SQL Server 2012

    - by SQLOS Team
    Here's a quick summary of SQLOS feature enhancements going into SQL Server 2012. Most of these are already in the CTP3 pre-release, except for the Resource Governor enhancements, which will be in the release candidate. We've blogged about a couple of these items before. I plan to add detail; let me know which ones you'd like to see more on:

    - Memory Manager redesign -- predictable sizing and governing of SQL memory consumption:
      - sp_configure 'max server memory' now limits all memory committed by SQL Server
      - Resource Governor governs all SQL memory consumption (other than special cases like the buffer pool)
      - Improved scalability of complex queries and operations that make >8K allocations
      - Improved CPU and NUMA locality for memory accesses
      - Single memory manager that handles page allocations of all sizes
      - Consistent out-of-memory handling & management across different internal components
    - Optimized Memory Broker for Column Store indexes (Project Apollo)
    - Resource Governor:
      - Support larger-scale multi-tenancy by increasing the max number of resource pools: 20 -> 64 [for 64-bit]
      - Enable predictable chargeback and isolation by adding a hard cap on CPU usage
      - Enable vertical isolation of machine resources
      - Resource pools can be affinitized to individual or groups of schedulers, or to NUMA nodes
      - New DMV for resource pool affinity
    - CLR 4 support, adding the advantages of .NET Framework 4
    - sp_server_diagnostics:
      - Captures diagnostic data and health information about SQL Server to detect potential failures
      - Analyzes internal system state
      - Reliable when nothing else is working
    - New SQLOS DMVs (in 2008 R2 SP1):
      - SQL Server related configuration -- new DMV: sys.dm_server_services
      - OS related resource configuration -- new DMVs: sys.dm_os_volume_stats, sys.dm_os_windows_info, sys.dm_server_registry
      - XEvents for SQL and OS related Perfmon counters
      - Extended sys.dm_os_sys_info
      - See previous blog posts here and here.
    - Scale / mission critical:
      - Increased scalability: support for Windows 8 max memory and logical processors
      - Dynamic Memory support in Standard Edition; Hot-Add Memory enabled when virtualized
      - Various Tier 1 performance improvements, including reduced instructions for superlatches

    Originally posted at http://blogs.msdn.com/b/sqlosteam/
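
    As a taste of the new DMVs, the service-configuration view can be queried directly (a sketch):

        -- service account, startup type and status for each SQL Server service
        SELECT servicename, startup_type_desc, status_desc, service_account
        FROM sys.dm_server_services;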

  • Maximizing TCP connections on HAProxy load balancer

    - by imaginative
    I am currently using HAProxy to load-balance tcp connections from clients to my Erlang app server. The connection is persistent, which means I'm limited to roughly 64K clients on an optimized server (I'm currently running HAProxy on an m1.large EC2 instance). My app server is designed to scale horizontally based on the number of TCP connections. What's worrying me, though, is that I'll need as many HAProxy servers as app servers, since it's a 1:1 connection. Is there currently a way to "proxy" the tcp connection to the app server so that once HAProxy sends the client off to my Erlang server, it can free up the connection, ready to serve another client? Are there any papers or existing solutions out there I can read, so that I only have to worry about the 64K limit on my app servers, and not on the load-balancing servers themselves?

  • Which is faster for read access on EC2; local drive or EBS?

    - by Phillip Oldham
    Which is faster for read access on an EC2 instance: the "local" drive or an attached EBS volume? I have some data that needs to be persisted, so I have placed it on an EBS volume. I'm using OpenSolaris, so this volume has been attached as a ZFS pool. However, I have a large chunk of EC2 disk space that's going to go unused, so I'm considering re-purposing it as a ZFS cache volume, but I don't want to do this if the disk access is going to be slower than that of the EBS volume, as it would potentially have a detrimental effect.
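
    For reference, attaching the ephemeral disk as an L2ARC read cache is a single command (a sketch; the pool and device names are placeholders, and losing the cache device when the instance goes away is harmless since L2ARC holds no unique data):

        # add the instance-store disk as a read cache for pool "tank"
        zpool add tank cache c7d1p0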
