Search Results

Search found 90811 results on 3633 pages for 'hyper v server 2012 r2'.

  • Check if value of textbox extended with MaskedEditExtender is valid?

    - by Ismail S
    Below is my code:

        <asp:TextBox ID="FromDateTextBox" runat="server" />
        <asp:ImageButton ID="FromDateImageButton" runat="server" ImageUrl="~/images/calander.png" />
        <ajaxkit:CalendarExtender ID="FromDate" runat="server" TargetControlID="FromDateTextBox"
            CssClass="CalanderControl" PopupButtonID="FromDateImageButton" Enabled="True" />
        <ajaxkit:MaskedEditExtender id="FromDateMaskedEditExtender" runat="server" targetcontrolid="FromDateTextBox"
            Mask="99/99/9999" messagevalidatortip="true" onfocuscssclass="MaskedEditFocus"
            oninvalidcssclass="MaskedEditError" masktype="Date" displaymoney="Left" acceptnegative="Left"
            errortooltipenabled="True" />
        <ajaxkit:MaskedEditValidator id="FromDateMaskedEditValidator" runat="server"
            controlextender="FromDateMaskedEditExtender" controltovalidate="FromDateTextBox"
            emptyvaluemessage="Date is required" invalidvaluemessage="Date is invalid" display="Dynamic"
            tooltipmessage="Input a date" emptyvalueblurredtext="*" invalidvalueblurredmessage="*"
            validationgroup="MKE" />

    I've set Culture="auto" UICulture="auto" in the @Page directive and EnableScriptGlobalization="true" EnableScriptLocalization="true" on the ScriptManager so that the textbox uses the client's culture-specific date format. I also have a Go button on my page which does a partial postback. So, I want to validate the FromDateTextBox in JavaScript when the Go button is clicked.
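    One way to do that (a sketch, not the poster's confirmed solution; the GoButton ID and the handler name are placeholders) is to call the client-side validation API that ASP.NET emits for its validators, scoped to the "MKE" validation group, from the Go button's OnClientClick:

        <script type="text/javascript">
            // Runs the MaskedEditValidator (ValidationGroup="MKE") on the client.
            // Returning false cancels the postback when the date is invalid.
            function validateFromDate() {
                if (typeof Page_ClientValidate === 'function') {
                    return Page_ClientValidate('MKE');
                }
                return true; // validators not rendered; let the postback proceed
            }
        </script>

        <asp:Button ID="GoButton" runat="server" Text="Go" ValidationGroup="MKE"
            OnClientClick="return validateFromDate();" />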

    Read the article

  • WebCenter Content shared folders for clustering

    - by Kyle Hatlestad
    When configuring a WebCenter Content (WCC) cluster, one of the things that makes it unique from some other WebLogic Server applications is its requirement for a shared file system. This is actually no different than 10g and previous versions of UCM when it ran directly on a JVM. And while it is simple enough to say it needs a shared file system, there are some crucial details in how those directories are configured, and if they aren't followed, you may end up with some unwanted behavior. This blog post will go into the details of how exactly the file systems should be split and what options are required.

    Beyond documents being stored on the file system and/or database, and metadata being stored in the database along with other structured data, there is other information being read and written to on the file system. Information such as user profile preferences, workflow item state information, metadata profiles, and other details are stored in files. In addition, for certain processes within WCC, each of the nodes needs to know what the other nodes are doing so they don't step on each other. WCC keeps track of this through the use of lock files on the file system. Because of this, each node of the WCC cluster must have access to the same file system, just as they have access to the same database.

    WCC uses its own locking mechanism based on files, so it also needs to have access to those files without file attribute caching and without locking being done by the client (node). If one of the nodes accesses a certain status file and it happens to be cached, that node might attempt to run a process which another node is already working on. Or if a particular file is locked by one of the node clients, this could interfere with access by another node. Unfortunately, disabling file attribute caching on the file share can impact performance, so it is important to only disable caching and locking on the particular folders which require it.

    When configuring WebCenter Content after deploying the domain, it asks for three different directories: Content Server Instance Folder, Native File Repository Location, and Weblayout Folder. And starting in PS5, it now asks for the User Profile Folder. Even if you plan on storing the content in the database, you still need to establish the Native File (Vault) and Weblayout directories. These will be used for handling temporary files, cached files, and files used to deliver the UI.

    For these directories, the only folder which needs to have file attribute caching and locking disabled is the ‘Content Server Instance Folder’. So when establishing this share through NFS or a clustered file system, be sure to specify those options. For instance, if creating the share through NFS, use the ‘noac’ and ‘nolock’ mount options. For the other directories, caching and locking should be left enabled to provide the best performance for those locations.
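    To make the mount options concrete, here is a sketch of what the two NFS mounts might look like (the server name and export paths are hypothetical; only the instance-folder share disables attribute caching and client-side locking):

        # /etc/fstab -- example entries only
        nfshost:/export/ucm_instance   /mnt/share_no_cache     nfs   rw,noac,nolock   0 0
        nfshost:/export/ucm_shared     /mnt/share_with_cache   nfs   rw               0 0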
    These directory path configurations are contained within the <domain dir>\ucm\cs\bin\intradoc.cfg file:

        #Server System Properties
        IDC_Id=UCM_server1

        #Server Directory Variables
        IdcHomeDir=/u01/fmw/Oracle_ECM1/ucm/idc/
        FmwDomainConfigDir=/u01/fmw/user_projects/domains/base_domain/config/fmwconfig/
        AppServerJavaHome=/u01/jdk/jdk1.6.0_22/jre/
        AppServerJavaUse64Bit=true
        IntradocDir=/mnt/share_no_cache/base_domain/ucm/cs/
        VaultDir=/mnt/share_with_cache/ucm/cs/vault/
        WeblayoutDir=/mnt/share_with_cache/ucm/cs/weblayout/

        #Server Classpath variables

        #Additional Variables
        #NOTE: UserProfilesDir is only available in PS5 – 11.1.1.6.0
        UserProfilesDir=/mnt/share_with_cache/ucm/cs/data/users/profiles/

    In addition to these folder configurations, it’s also recommended to move node-specific folders to local disk to avoid unnecessary traffic to the shared directory. So on each node, go to <domain dir>\ucm\cs\bin\intradoc.cfg and add these additional configuration entries:

        VaultTempDir=<domain dir>/ucm/<cs>/vault/~temp/
        TraceDirectory=<domain dir>/servers/<UCM_serverN>/logs/
        EventDirectory=<domain dir>/servers/<UCM_serverN>/logs/event/

    And of course, don’t forget the cluster-specific configuration values to add as well. These can be added through Admin Server -> General Configuration -> Additional Configuration Variables or directly in the <IntradocDir>/config/config.cfg file:

        ArchiverDoLocks=true
        DisableSharedCacheChecking=true
        ServiceAllowRetry=true     (use only with Oracle RAC Database)
        PublishLockTimeout=300000  (time can vary depending on publishing time and number of nodes)

    For additional information and details on clustering configuration, I highly recommend reviewing document [1209496.1] on the support site. In addition, there is a great step-by-step guide on setting up a WebCenter Content cluster [1359930.1].

    Read the article

  • socket.setdefaulttimeout interacting with M2Crypto connection

    - by Becky
    Hello - I'm making a secure SSL connection to a server using Python and M2Crypto. See the code below.

        from M2Crypto import SSL, m2, x509
        from M2Crypto.m2xmlrpclib import Server, SSL_Transport

        ctx = SSL.Context()
        m2.ssl_ctx_use_pkey_privkey(ctx.ctx, myKey.pkey)
        m2.ssl_ctx_use_x509(ctx.ctx, myCert.x509)
        server = Server(serverUrl, SSL_Transport(ctx))
        server.ping()

    The above works fine. If I try to change the default socket timeout by adding the following two lines at the beginning of the code, I get a protocol error.

        import socket
        socket.setdefaulttimeout(40)

    This is the error I receive:

        File "/usr/local/lib/python2.4/xmlrpclib.py", line 1096, in call
            return self._send(self._name, args)
        File "/usr/local/lib/python2.4/xmlrpclib.py", line 1383, in _request
            verbose=self._verbose
        File "/usr/local/lib/python2.4/site-packages/M2Crypto/m2xmlrpclib.py", line 68, in request
            headers
        xmlrpclib.ProtocolError:

    Why is the default socket timeout causing problems?

    Read the article

  • Socket Communication in C#- IP-Port

    - by Ahmet Altun
    I am testing my socket programs at home, on a local network. The server and client programs are running on separate machines. The server socket is bound as:

        serverSocket.Bind(new IPEndPoint(IPAddress.Parse("127.0.0.1"), 8999));

    The client program (on the other computer) connects as:

        clientSocket.Connect(IPAddress.Parse("192.168.2.3"), 8999);

    Why can the client not communicate with the server? Do I need to make some firewall configuration or something like that? Or am I giving the client the wrong server IP? (I got it from ipconfig on the server.)
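    For what it's worth, binding the listener to 127.0.0.1 limits it to loopback connections, so a remote machine cannot reach it regardless of firewall settings. A minimal sketch of a listener that accepts LAN connections (keeping the port from the question; the backlog value is arbitrary):

        // Server: listen on all local interfaces instead of only loopback
        serverSocket.Bind(new IPEndPoint(IPAddress.Any, 8999));
        serverSocket.Listen(10);

        // Client: connect to the server's LAN address as before
        clientSocket.Connect(IPAddress.Parse("192.168.2.3"), 8999);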

    Read the article

  • Increase file upload size limit in iis6

    - by JustFoo
    Is there any other place besides the metabase.xml file where the file upload size limit can be modified? I am currently running a staging server with IIS 6 that is set up to allow uploads of files up to 20 MB. This works perfectly fine. I have a new production server where I am trying to set up the same size limit. So I edited the metabase.xml file and set the value to 20971520. Then I restarted IIS, and that didn't work. I then restarted the entire server, and that also didn't work. I can upload files of around 2 MB, so it is definitely allowing file sizes larger than the standard 200 KB default. But if I try uploading a 5 MB file, my upload.aspx page completely crashes. Is it possible there is something else I need to configure? The production server is located on a server farm; could there be some limits set on their end? Thanks
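    One detail worth checking (a suggestion, not a confirmed diagnosis): because the upload goes through an .aspx page, ASP.NET applies its own request-size limit on top of the IIS metabase setting, and it defaults to 4096 KB, which would fit the symptom of 2 MB uploads succeeding and 5 MB uploads failing. A hedged web.config example raising it to roughly 20 MB (the value is in kilobytes):

        <system.web>
            <httpRuntime maxRequestLength="20480" executionTimeout="3600" />
        </system.web>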

    Read the article

  • Wiring up JavaScript handlers after a Partial Postback in ASP.NET

    - by Richard
    I am trying to use LinkButtons with the DefaultButton property of the ASP.NET Panel in an UpdatePanel. I have read and used the various other answers that are around describing the wiring up of the click event so that a full postback is not done instead of a partial postback. When the page loads, I wire up the .click function of the LinkButton so that the DefaultButton property of the ASP.NET panel will work. This all works fine, until you bring an UpdatePanel into the mix. With an UpdatePanel, if there is a partial postback, the script to wire up the .click function is not called in the partial postback, and hitting enter reverts to causing a full submit of the form rather than triggering the LinkButton. How can I cause javascript to be executed after a partial postback to re-wire up the .click function of the LinkButton? I have produced a sample page which shows the problem. There are two alerts showing 1) When the code to hook up the .click function is being called, and 2) When the .click function has been called (this only happens when you hit enter in the textbox after the event has been wired up). To test this code, type something in the textbox and hit Enter. The text will be copied to the label control, but "Wiring up Event Click" alert will not be shown. Add another letter, hit enter again, and you'll get a full postback without the text being copied to the label control (as the LinkButton wasn't called). Because that was a full postback, the Wiring Up Event Click event will be called again, and the form will work properly the next time again. This is being done with ASP.NET 3.5. Test Case Code: <%@ Page Language="C#" Inherits="System.Web.UI.Page" Theme="" EnableTheming="false" AutoEventWireup="true" %> <script runat="server"> void cmdComplete_Click(object sender, EventArgs e) { lblOutput.Text = "Complete Pressed: " + txtInput.Text; } void cmdFirstButton_Click(object sender, EventArgs e) { lblOutput.Text = "First Button Pressed"; } protected override void OnLoad(EventArgs e) { HookupButton(cmdComplete); } void HookupButton(LinkButton button) { // Use the click event of the LinkButton to trigger the postback (this is used by the .click code below) button.OnClientClick = Page.ClientScript.GetPostBackEventReference(button, String.Empty); // Wire up the .click event of the button to call the onclick function, and prevent a form submit string clickString = string.Format(@" alert('Wiring up click event'); document.getElementById('{0}').click = function() {{ alert('Default button pressed'); document.getElementById('{0}').onclick(); }};", button.ClientID, Page.ClientScript.GetPostBackEventReference(button, "")); Page.ClientScript.RegisterStartupScript(button.GetType(), "click_hookup_" + button.ClientID, clickString, true); } </script> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html> <head> <title>DefaultButton/LinkButton Testing</title> <style type="text/css"> a.Button { line-height: 2em; padding: 5px; border: solid 1px #CCC; background-color: #EEE; } </style> </head> <body> <h1> DefaultButton/LinkButton Testing</h1> <form runat="server"> <asp:ScriptManager runat="server" /> <asp:UpdatePanel ID="UpdatePanel1" runat="server"> <ContentTemplate> <div style="position: relative"> <fieldset> <legend>Output</legend> <asp:Label runat="server" ID="lblOutput" /> </fieldset> <asp:Button runat="server" Text="First Button" ID="cmdFirstButton" OnClick="cmdFirstButton_Click" UseSubmitBehavior="false" /> <asp:Panel 
ID="Panel1" runat="server" DefaultButton="cmdComplete"> <label> Enter Text:</label> <asp:TextBox runat="server" ID="txtInput" /> <asp:LinkButton runat="server" CssClass="Button" ID="cmdComplete" OnClick="cmdComplete_Click" Text="Complete" /> </asp:Panel> </div> </ContentTemplate> </asp:UpdatePanel> <asp:Button runat="server" ID="cmdFullPostback" Text="Full Postback" /> </form> </body> </html>
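    One commonly suggested fix (a sketch against the test page above, not a verified solution for every case): scripts registered through Page.ClientScript are only emitted on full postbacks, whereas scripts registered through the ScriptManager are re-emitted after each partial postback of the UpdatePanel, so the .click hookup gets re-applied. In HookupButton, that would mean swapping the registration call:

        // Re-registers the startup script after partial postbacks as well as full ones.
        ScriptManager.RegisterStartupScript(
            button,                              // a control inside the UpdatePanel
            button.GetType(),
            "click_hookup_" + button.ClientID,   // same key as before
            clickString,
            true);                               // add <script> tags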

    Read the article

  • Building an Infrastructure Cloud with Oracle VM for x86 + Enterprise Manager 12c

    - by Richard Rotter
    Cloud Computing? Everyone is talking about Cloud these days. Everyone is explaining how the cloud will help you to bring your service up and running very fast, securely, and with little effort. You can find these kinds of presentations at almost every event around the globe. But what is really behind all this stuff? Is it really so simple? And the answer is: Yes it is! With the Oracle SW stack it is! In this post, I will try to bring this down to earth, demonstrating how easy it could be to build a cloud infrastructure with Oracle's solution for cloud computing. But let me cover some basics first: How fast can you build a cloud? How elastic is your cloud, so you can provide new services on demand? How much effort does it take to monitor and operate your cloud infrastructure in order to meet your SLAs? How easy is it to charge back for the services provided? These are the critical success factors of cloud computing, and Oracle has an answer to all of those questions. By using Oracle VM for x86 in combination with Enterprise Manager 12c you can build and control your cloud environment quickly and easily. What are the fundamental building blocks for your cloud?

    Oracle Cloud Building Blocks

    #1 Hardware

    Surprise, surprise. Even the cloud needs to run somewhere, hence you will need hardware. This HW normally consists of servers, storage and networking. But Oracle goes beyond that. There are Optimized Solutions available for your cloud infrastructure. This is a cookbook to build your HW cloud platform. For example, building your cloud infrastructure with blades and our network infrastructure will reduce complexity in your datacenter (blades with switch network modules, splitter cables to reduce the amount of cables, and TOR (Top Of the Rack) switches which form the interface to your infrastructure environment). Reducing complexity even in the cabling will help you to manage your environment more efficiently and with less risk. Of course, our engineered systems fit into the cloud perfectly too. Although they are considered a PaaS themselves, having the database SW (for Exadata) and the application development environment (for Exalogic) already deployed on them, in general they are ideal systems to enable you to build your own cloud and PaaS infrastructure.

    #2 Virtualization

    The next missing link in the cloud setup is virtualization. For me personally, it's one of the best hidden "secrets" that Oracle can provide you with a complete virtualization stack in terms of a hypervisor on both architectures: x86 and SPARC CPUs. Oracle VM for x86 and Oracle VM for SPARC are available at no additional license cost if you are running this virtualization stack on top of Oracle HW (and with Oracle Premier Support for HW). This completes the virtualization portfolio, together with Solaris Zones, introduced with Solaris 10 a few years ago. Let me explain how Oracle VM for x86 works. Oracle VM for x86 consists of two main parts:

    - The Oracle VM Server: Oracle VM Server is installed on bare metal and is the hypervisor which is able to run virtual machines. It has a very small footprint; the ISO image of Oracle VM Server is only about 200 MB. It is very small but efficient. You can install an OVM-Server in less than 5 minutes by booting the server with the ISO image assigned and providing the necessary configuration parameters (much like installing a Linux distribution). After the installation, the OVM-Server is ready to use. That's all.
    - The Oracle VM-Manager: OVM-Manager is the central management tool where you can control your OVM-Servers. OVM-Manager provides the graphical user interface, which is an Application Development Framework (ADF) application with a familiar web-browser based interface, to manage Oracle VM Servers, virtual machines, and resources. The Oracle VM Manager has the following capabilities:

      - Create virtual machines
      - Create server pools
      - Power virtual machines on and off
      - Manage networks and storage
      - Import virtual machines, ISO files, and templates
      - Manage high availability of Oracle VM Servers, server pools, and virtual machines
      - Perform live migration of virtual machines

    I want to highlight one of the goodies which you can use if you are running Oracle VM for x86: preconfigured, downloadable Virtual Machine Templates from edelivery. With these templates, you can download completely preconfigured virtual machines into your environment, boot them up, configure them at first boot, and use them. There are templates available for almost all Oracle SW and applications (like Fusion Middleware, Database, Siebel, etc.).

    #3 Cloud Management

    The management of your cloud infrastructure is key. This is a day-to-day job. Acquiring HW and installing a virtualization layer on top of it is done just at the beginning, and when you want to expand your infrastructure. But managing your cloud, keeping it up and running, deploying new services, changing your chargeback model, etc.; these are the daily jobs. These jobs must be simple, secure and easy to manage. The Enterprise Manager 12c Cloud provides this functionality from one management cockpit. Enterprise Manager 12c uses Oracle VM Manager to control OVM server pools. Once you have registered your OVM-Managers in Enterprise Manager, you are able to set up your cloud infrastructure and manage everything from Enterprise Manager. What you need to do in EM12c is:

    - Register your OVM Manager in Enterprise Manager. After registering your OVM Manager, all the functionality of Oracle VM for x86 is also available in Enterprise Manager. Enterprise Manager works as a "manager of the managers". You can register as many OVM-Managers as you want and control your complete virtualization environment.
    - Create roles and users for your Self Service Portal in Enterprise Manager. With this step you allow users to log on to the Enterprise Manager Self Service Portal. Users can request virtual machines in this portal.
    - Set up the cloud infrastructure. Set up the quotas for your self-service users. How many VMs can they request? How much of your resources (cpu, memory, storage, network, etc.)? Which SW components (templates, assemblies) can your self-service users request? In this step, you basically set up the complete cloud infrastructure.
    - Set up chargeback. Once your cloud is set up, you need to configure your chargeback mechanism. Enterprise Manager collects resource metrics at a very detailed level, and almost all collected metrics can be used in the chargeback module. You can define chargeback plans based on configuration (charge for the amount of cpu, memory, and storage assigned to a machine, or for a specific OS that is installed) or chargeback based on resource consumption (% of cpu used, storage used, etc.). You can also define a combination of configuration and consumption chargeback plans. The chargeback module is very flexible.
    Here is an overview of the workflow for handling an infrastructure cloud in EM:

    Summary

    As you can see, setting up an Infrastructure Cloud Service with Oracle VM for x86 and Enterprise Manager 12c is really simple. I personally configured a complete cloud environment with three x86 servers and a small JBOD SAN box in less than 3 hours. There is no magic in it; it is all straightforward. Of course, you have to have some experience with Oracle VM and Enterprise Manager, and experience in setting up Linux environments helps as well. I plan to publish a technical cookbook in the next few weeks. I hope you found this post useful and will see you again here on our blog. Any hints or comments are welcome!

    Read the article

  • Caching web page data, Database or File

    - by Mahdi
    I am creating an RSS reader application that requests the RSS from my server. I mean, the RSS is first downloaded to my server, then the application downloads it from my server. I want to create an RSS cache for this, and, for example, each feed would be refreshed every 1 minute. So, if 10 users request the RSS of example.com within 1 minute, my server will download it only for the first request, and for the other 9 requests the RSS will be loaded from the cache. My question is: should I use a database (MSSQL) for this purpose, or should I use files? I have no limit on database size nor on file size... EDIT: I'm using ASP.NET for the server.
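    To make the file option concrete, here is a minimal C# sketch that would run inside a page or handler (the cache path, feed URL, and one-minute window are placeholders, and the System.IO and System.Net namespaces are assumed to be imported):

        string cachePath = Server.MapPath("~/App_Data/rsscache/example.com.xml");
        bool stale = !File.Exists(cachePath) ||
                     DateTime.UtcNow - File.GetLastWriteTimeUtc(cachePath) > TimeSpan.FromMinutes(1);
        if (stale)
        {
            // First request in the window: fetch the feed and refresh the cached copy
            using (var web = new WebClient())
            {
                File.WriteAllText(cachePath, web.DownloadString("http://example.com/rss"));
            }
        }
        // All requests within the window are served from the cache file
        Response.ContentType = "text/xml";
        Response.WriteFile(cachePath);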

    Read the article

  • Zend_XmlRpc :Failed to parse response error

    - by davykiash
    I am trying to get a simple hello-world XML-RPC server setup to work. However, I get a "Failed to parse response" error when I run the test URL http://localhost/client/index/ in my browser.

    In my Rpc controller that handles all my XML-RPC calls:

        class RpcController extends Zend_Controller_Action
        {
            public function init()
            {
                $this->_helper->layout->disableLayout();
                $this->_helper->viewRenderer->setNoRender();
            }

            public function xmlrpcAction()
            {
                $server = new Zend_XmlRpc_Server();
                $server->setClass('Service_Rpctest', 'test');
                $server->handle();
            }
        }

    In my client controller that calls the XML-RPC server:

        class ClientController extends Zend_Controller_Action
        {
            public function indexAction()
            {
                $clientrpc = new Zend_XmlRpc_Client('http://localhost/rpc/xmlrpc/');
                // Render output to the view
                $this->view->rpcvalue = $clientrpc->call('test.sayHello');
            }
        }

    In my Service_Rpctest class:

        <?php
        class Service_Rpctest
        {
            /**
             * Return the Hello string
             *
             * @return string
             */
            public function sayHello()
            {
                $value = 'Hello';
                return $value;
            }
        }

    What am I missing?
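    A quick way to see what is actually going wrong (a debugging sketch, not a confirmed fix; it assumes Zend Framework 1's HTTP client API): "Failed to parse response" generally means the endpoint returned something other than an XML-RPC payload, such as an HTML error page or a layout-wrapped response, so dumping the raw body of the last HTTP response often reveals the real problem:

        $clientrpc = new Zend_XmlRpc_Client('http://localhost/rpc/xmlrpc/');
        try {
            $this->view->rpcvalue = $clientrpc->call('test.sayHello');
        } catch (Exception $e) {
            // Show what the server actually sent back
            var_dump($clientrpc->getHttpClient()->getLastResponse()->getBody());
            throw $e;
        }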

    Read the article

  • Database Owner Conundrum

    - by Johnm
    Have you ever restored a database from a production environment on Server A into a development environment on Server B and had some items, such as Service Broker, mysteriously cease functioning? You might want to consider reviewing the database owner property of the database. The Scenario Recently, I was developing some messaging functionality that utilized the Service Broker feature of SQL Server in a development environment. Within the instance of the development environment resided two databases: One was a restored version of a production database that we will call "RestoreDB". The second database was a brand new database that has yet to exist in the production environment that we will call "DevDB". The goal is to setup a communication path between RestoreDB and DevDB that will later be implemented into the production database. After implementing all of the Service Broker objects that are required to communicate within a database as well as between two databases on the same instance I found myself a bit confounded. My testing was showing that the communication was successful when it was occurring internally within DevDB; but the communication between RestoreDB and DevDB did not appear to be working. Profiler to the rescue After carefully reviewing my code for any misspellings, missing commas or any other minor items that might be a syntactical cause of failure, I decided to launch Profiler to aid in the troubleshooting. After simulating the cross database messaging, I noticed the following error appearing in Profiler: An exception occurred while enqueueing a message in the target queue. Error: 33009, State: 2. The database owner SID recorded in the master database differs from the database owner SID recorded in database '[Database Name Here]'. You should correct this situation by resetting the owner of database '[Database Name Here]' using the ALTER AUTHORIZATION statement. Now, this error message is a helpful one. Not only does it identify the issue in plain language, it also provides a potential solution. An execution of the following query that utilizes the catalog view sys.transmission_queue revealed the same error message for each communication attempt: SELECT     * FROM        sys.transmission_queue; Seeing the situation as a learning opportunity I dove a bit deeper. Reviewing the database properties  The owner of a specific database can be easily viewed by right-clicking the database in SQL Server Management Studio and selecting the "properties" option. The owner is listed on the "General" page of the properties screen. In my scenario, the database in the production server was created by Frank the DBA; therefore his server login appeared as the owner: "ServerName\Frank". While this is interesting information, it certainly doesn't tell me much in regard to the SID (security identifier) and its existence, or lack thereof, in the master database as the error suggested. I pulled together the following query to gather more interesting information: SELECT     a.name     , a.owner_sid     , b.sid     , b.name     , b.type_desc FROM        master.sys.databases a     LEFT OUTER JOIN master.sys.server_principals b         ON a.owner_sid = b.sid WHERE     a.name not in ('master','tempdb','model','msdb'); This query also helped identify how many other user databases in the instance were experiencing the same issue. In this scenario, I saw that there were no matching SIDs in server_principals to the owner SID for my database. 
    What login should be used as the database owner instead of Frank's? The system stored procedure sp_helplogins will provide a list of the valid logins that can be used. Here is an example of its use, revealing all available logins:

        EXEC sp_helplogins;

    Fixing a hole

    The error message stated that the recommended solution was to execute the ALTER AUTHORIZATION statement. The full statement for this scenario would appear as follows:

        ALTER AUTHORIZATION ON DATABASE::[Database Name Here] TO [Login Name];

    Another option is to execute the following statement using the sp_changedbowner system stored procedure; but please keep in mind that this stored procedure has been deprecated and will likely disappear in future versions of SQL Server:

        EXEC dbo.sp_changedbowner @loginname = [Login Name];

    And They Lived Happily Ever After

    Upon changing the database owner to an existing login and simulating the inner and cross-database messaging, the errors ceased. More importantly, all messages sent through this feature now successfully complete their journey. I have added the ownership change to my restoration script for the development environment.

    Read the article

  • Atheros 922 PCI WIFI is disabled in Unity but enabled in terminal - How to get it to work?

    - by zewone
    I am trying to get my PCI Wireless Atheros 922 card to work. It is disabled in Unity: both the network utility and the desktop (see screenshot http://www.amisdurailhalanzy.be/Screenshot%20from%202012-10-25%2013:19:54.png) I tried many different advises on many different forums. Installed 12.10 instead of 12.04, enabled all interfaces... etc. I have read about the aht9 driver... The terminal shows no hw or sw lock for the Atheros card, nevertheless, it is still disabled. Nothing worked so far, the card is still disabled. Any help is much appreciated. Here are more tech details: myuser@adri1:~$ sudo lshw -C network *-network:0 DISABLED description: Wireless interface product: AR922X Wireless Network Adapter vendor: Atheros Communications Inc. physical id: 2 bus info: pci@0000:03:02.0 logical name: wlan1 version: 01 serial: 00:18:e7:cd:68:b1 width: 32 bits clock: 66MHz capabilities: pm bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=ath9k driverversion=3.5.0-17-generic firmware=N/A latency=168 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:18 memory:d8000000-d800ffff *-network:1 description: Ethernet interface product: VT6105/VT6106S [Rhine-III] vendor: VIA Technologies, Inc. physical id: 6 bus info: pci@0000:03:06.0 logical name: eth0 version: 8b serial: 00:11:09:a3:76:4a size: 10Mbit/s capacity: 100Mbit/s width: 32 bits clock: 33MHz capabilities: pm bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=via-rhine driverversion=1.5.0 duplex=half latency=32 link=no maxlatency=8 mingnt=3 multicast=yes port=MII speed=10Mbit/s resources: irq:18 ioport:d300(size=256) memory:d8013000-d80130ff *-network DISABLED description: Wireless interface physical id: 1 bus info: usb@1:8.1 logical name: wlan0 serial: 00:11:09:51:75:36 capabilities: ethernet physical wireless configuration: broadcast=yes driver=rt2500usb driverversion=3.5.0-17-generic firmware=N/A link=no multicast=yes wireless=IEEE 802.11bg myuser@adri1:~$ sudo rfkill list all 0: hci0: Bluetooth Soft blocked: no Hard blocked: no 1: phy1: Wireless LAN Soft blocked: no Hard blocked: yes 2: phy0: Wireless LAN Soft blocked: no Hard blocked: no myuser@adri1:~$ dmesg | grep wlan0 [ 15.114235] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready myuser@adri1:~$ dmesg | egrep 'ath|firm' [ 14.617562] ath: EEPROM regdomain: 0x30 [ 14.617568] ath: EEPROM indicates we should expect a direct regpair map [ 14.617572] ath: Country alpha2 being used: AM [ 14.617575] ath: Regpair used: 0x30 [ 14.637778] ieee80211 phy0: >Selected rate control algorithm 'ath9k_rate_control' [ 14.639410] Registered led device: ath9k-phy0 myuser@adri1:~$ dmesg | grep wlan1 [ 15.119922] IPv6: ADDRCONF(NETDEV_UP): wlan1: link is not ready myuser@adri1:~$ lspci -nn | grep 'Atheros' 03:02.0 Network controller [0280]: Atheros Communications Inc. 
AR922X Wireless Network Adapter [168c:0029] (rev 01) myuser@adri1:~$ sudo ifconfig eth0 Link encap:Ethernet HWaddr 00:11:09:a3:76:4a inet addr:192.168.2.2 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: fe80::211:9ff:fea3:764a/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:5457 errors:0 dropped:0 overruns:0 frame:0 TX packets:2548 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3425684 (3.4 MB) TX bytes:282192 (282.1 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:590 errors:0 dropped:0 overruns:0 frame:0 TX packets:590 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:53729 (53.7 KB) TX bytes:53729 (53.7 KB) myuser@adri1:~$ sudo iwconfig wlan0 IEEE 802.11bg ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=off Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:on lo no wireless extensions. eth0 no wireless extensions. wlan1 IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=0 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:off myuser@adri1:~$ lsmod | grep "ath9k" ath9k 116549 0 mac80211 461161 3 rt2x00usb,rt2x00lib,ath9k ath9k_common 13783 1 ath9k ath9k_hw 376155 2 ath9k,ath9k_common ath 19187 3 ath9k,ath9k_common,ath9k_hw cfg80211 175375 4 rt2x00lib,ath9k,mac80211,ath myuser@adri1:~$ iwlist scan wlan0 Failed to read scan data : Network is down lo Interface doesn't support scanning. eth0 Interface doesn't support scanning. wlan1 Failed to read scan data : Network is down myuser@adri1:~$ lsb_release -d Description: Ubuntu 12.10 myuser@adri1:~$ uname -mr 3.5.0-17-generic i686 ![Schizophrenic Ubuntu](http://www.amisdurailhalanzy.be/Screenshot%20from%202012-10-25%2013:19:54.png) Any help much appreciated... Thanks, Philippe 31-10-2012 ... I have some more updates. When I do the following command it does see my Wifi router... So even if it is still disabled... the card seems to work and see the router (ESSID:"5791BC26-CE9C-11D1-97BF-0000F81E") See below: sudo iwlist wlan1 scanning wlan1 Scan completed : Cell 01 - Address: 00:19:70:8F:B0:EA Channel:10 Frequency:2.457 GHz (Channel 10) Quality=51/70 Signal level=-59 dBm Encryption key:on ESSID:"5791BC26-CE9C-11D1-97BF-0000F81E" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s 9 Mb/s; 12 Mb/s; 18 Mb/s Bit Rates:24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s Mode:Master Extra:tsf=000000025dbf2188 Extra: Last beacon: 108ms ago IE: Unknown: 002035373931424332362D434539432D313144312D393742462D3030303046383145 IE: Unknown: 010882848B960C121824 IE: Unknown: 03010A IE: Unknown: 0706424520010D14 IE: IEEE 802.11i/WPA2 Version 1 Group Cipher : TKIP Pairwise Ciphers (2) : CCMP TKIP Authentication Suites (1) : PSK IE: Unknown: 2A0100 IE: Unknown: 32043048606C IE: Unknown: DD180050F2020101030003A4000027A4000042435E0062322F00 IE: Unknown: DD0900037F01010000FF7F IE: Unknown: DD0A00037F04010000000000

    Read the article

  • Click edit button twice in gridview asp.net c# issue

    - by Supriyo Banerjee
    I have a gridview created on a page where I want to provide an edit button for the user to click in. However the issue is the grid view row becomes editable only while clicking the edit button second time. Not sure what is going wrong here, any help would be appreciated. One additional point is my grid view is displayed on the page only on a click of a button and is not there on page_load event hence. Posting the code snippets: //MY Aspx code <Columns> <asp:TemplateField HeaderText="Slice" SortExpression="name"> <ItemTemplate> <asp:Label ID="lblslice" Text='<%# Eval("slice") %>' runat="server"></asp:Label> </ItemTemplate> <EditItemTemplate> <asp:Label ID="lblslice" Text='<%# Eval("slice") %>' runat="server"></asp:Label> </EditItemTemplate> </asp:TemplateField> <asp:TemplateField HeaderText="Metric" SortExpression="Description"> <ItemTemplate> <asp:Label ID="lblmetric" Text='<%# Eval("metric")%>' runat="server"></asp:Label> </ItemTemplate> <EditItemTemplate> <asp:Label ID="lblmetric" Text='<%# Eval("metric")%>' runat="server"></asp:Label> </EditItemTemplate> </asp:TemplateField> <asp:TemplateField HeaderText="Original" SortExpression="Type"> <ItemTemplate> <asp:Label ID="lbloriginal" Text='<%# Eval("Original")%>' runat="server"></asp:Label> </ItemTemplate> <EditItemTemplate> <asp:Label ID="lbloriginal" Text='<%# Eval("Original")%>' runat="server"></asp:Label> </EditItemTemplate> </asp:TemplateField> <asp:TemplateField HeaderText="WOW" SortExpression="Market"> <ItemTemplate> <asp:Label ID="lblwow" Text='<%# Eval("WOW")%>' runat="server"></asp:Label> </ItemTemplate> <EditItemTemplate> <asp:Label ID="lblwow" Text='<%# Eval("WOW")%>' runat="server"></asp:Label> </EditItemTemplate> </asp:TemplateField> <asp:TemplateField HeaderText="Change" SortExpression="Market" > <ItemTemplate> <asp:Label ID="lblChange" Text='<%# Eval("Change")%>' runat="server"></asp:Label> </ItemTemplate> <EditItemTemplate> <asp:TextBox ID="TxtCustomerID" Text='<%# Eval("Change") %> ' runat="server"></asp:TextBox> </EditItemTemplate> </asp:TemplateField> <asp:CommandField HeaderText="Edit" ShowEditButton="True" /> </Columns> </asp:GridView> //My code behind: protected void Page_Load(object sender, EventArgs e) { } public void populagridview1(string slice,string fromdate,string todate,string year) { SqlCommand cmd; SqlDataAdapter da; DataSet ds; cmd = new SqlCommand(); cmd.CommandType = CommandType.StoredProcedure; cmd.CommandText = "usp_geteventchanges"; cmd.Connection = conn; conn.Open(); SqlParameter param1 = new SqlParameter("@slice", slice); cmd.Parameters.Add(param1); SqlParameter param2 = new SqlParameter("@fromdate", fromdate); cmd.Parameters.Add(param2); SqlParameter param3 = new SqlParameter("@todate", todate); cmd.Parameters.Add(param3); SqlParameter param4 = new SqlParameter("@year", year); cmd.Parameters.Add(param4); da = new SqlDataAdapter(cmd); ds = new DataSet(); da.Fill(ds, "Table"); GridView1.DataSource = ds; GridView1.DataBind(); conn.Close(); } protected void ImpactCalc(object sender, EventArgs e) { populagridview1(ddl_slice.SelectedValue, dt_to_integer(Picker1.Text), dt_to_integer(Picker2.Text), Txt_Year.Text); } protected void GridView1_RowEditing(object sender, GridViewEditEventArgs e) { gvEditIndex = e.NewEditIndex; Gridview1.DataBind(); } My page layout This edit screen appears after clicking edit twice.. the grid view gets displayed on hitting the Calculate impact button. The data is from a backend stored procedure which is fired on clicking the Calculate impact button
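    For reference, the pattern usually suggested for this symptom (a sketch reusing the question's own binding call, not a confirmed fix for this exact page) is to set the grid's EditIndex inside RowEditing and then rebind with the same parameters, since the grid is not bound in Page_Load:

        protected void GridView1_RowEditing(object sender, GridViewEditEventArgs e)
        {
            GridView1.EditIndex = e.NewEditIndex;
            // Rebind with the same values used when the grid was first built
            populagridview1(ddl_slice.SelectedValue, dt_to_integer(Picker1.Text),
                            dt_to_integer(Picker2.Text), Txt_Year.Text);
        }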

    Read the article

  • Best Practices - which domain types should be used to run applications

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains) One question that frequently comes up is "which types of domain should I use to run applications?" There used to be a simple answer in most cases: "only run applications in guest domains", but enhancements to T-series servers, Oracle VM Server for SPARC and the advent of SPARC SuperCluster have made this question more interesting and worth qualifying differently. This article reviews the relevant concepts and provides suggestions on where to deploy applications in a logical domains environment. Review: division of labor and types of domain Oracle VM Server for SPARC offloads many functions from the hypervisor to domains (also called virtual machines). This is a modern alternative to using a "thick" hypervisor that provides all virtualization functions, as in traditional VM designs, This permits a simpler hypervisor design, which enhances reliability, and security. It also reduces single points of failure by assigning responsibilities to multiple system components, which further improves reliability and security. In this architecture, management and I/O functionality are provided within domains. Oracle VM Server for SPARC does this by defining the following types of domain, each with their own roles: Control domain - management control point for the server, used to configure domains and manage resources. It is the first domain to boot on a power-up, is an I/O domain, and is usually a service domain as well. I/O domain - has been assigned physical I/O devices: a PCIe root complex, a PCI device, or a SR-IOV (single-root I/O Virtualization) function. It has native performance and functionality for the devices it owns, unmediated by any virtualization layer. Service domain - provides virtual network and disk devices to guest domains. Guest domain - a domain whose devices are all virtual rather than physical: virtual network and disk devices provided by one or more service domains. In common practice, this is where applications are run. Typical deployment A service domain is generally also an I/O domain: otherwise it wouldn't have access to physical device "backends" to offer to its clients. Similarly, an I/O domain is also typically a service domain in order to leverage the available PCI busses. Control domains must be I/O domains, because they boot up first on the server and require physical I/O. It's typical for the control domain to also be a service domain too so it doesn't "waste" the I/O resources it uses. A simple configuration consists of a control domain, which is also the one I/O and service domain, and some number of guest domains using virtual I/O. In production, customers typically use multiple domains with I/O and service roles to eliminate single points of failure: guest domains have virtual disk and virtual devices provisioned from more than one service domain, so failure of a service domain or I/O path or device doesn't result in an application outage. This is also used for "rolling upgrades" in which service domains are upgraded one at a time while their guests continue to operate without disruption. (It should be noted that resiliency to I/O device failures can also be provided by the single control domain, using multi-path I/O) In this type of deployment, control, I/O, and service domains are used for virtualization infrastructure, while applications run in guest domains. 
Changing application deployment patterns The above model has been widely and successfully used, but more configuration options are available now. Servers got bigger than the original T2000 class machines with 2 I/O busses, so there is more I/O capacity that can be used for applications. Increased T-series server capacity made it attractive to run more vertical applications, such as databases, with higher resource requirements than the "light" applications originally seen. This made it attractive to run applications in I/O domains so they could get bare-metal native I/O performance. This is leveraged by the SPARC SuperCluster engineered system, announced a year ago at Oracle OpenWorld. In SPARC SuperCluster, I/O domains are used for high performance applications, with native I/O performance for disk and network and optimized access to the Infiniband fabric. Another technical enhancement is the introduction of Direct I/O (DIO) and Single Root I/O Virtualization (SR-IOV), which make it possible to give domains direct connections and native I/O performance for selected I/O devices. A domain with either a DIO or SR-IOV device is an I/O domain. In summary: not all I/O domains own PCI complexes, and there are increasingly more I/O domains that are not service domains. They use their I/O connectivity for performance for their own applications. However, there are some limitations and considerations: at this time, a domain using physical I/O cannot be live-migrated to another server. There is also a need to plan for security and introducing unneeded dependencies: if an I/O domain is also a service domain providing virtual I/O go guests, it has the ability to affect the correct operation of its client guest domains. This is even more relevant for the control domain. where the ldm has to be protected from unauthorized (or even mistaken) use that would affect other domains. As a general rule, running applications in the service domain or the control domain should be avoided. To recap: Guest domains with virtual I/O still provide the greatest operational flexibility, including features like live migration. I/O domains can be used for applications with high performance requirements. This is used to great effect in SPARC SuperCluster and in general T4 deployments. Direct I/O (DIO) and Single Root I/O Virtualization (SR-IOV) make this more attractive by giving direct I/O access to more domains. Service domains should in general not be used for applications, because compromised security in the domain, or an outage, can affect other domains that depend on it. This concern can be mitigated by providing guests' their virtual I/O from more than one service domain, so an interruption of service in the service domain does not cause an application outage. The control domain should in general not be used to run applications, for the same reason. SPARC SuperCluster use the control domain for applications, but it is an exception: it's not a general purpose environment; it's an engineered system with specifically configured applications and optimization for optimal performance. These are recommended "best practices" based on conversations with a number of Oracle architects. Keep in mind that "one size does not fit all", so you should evaluate these practices in the context of your own requirements. Summary Higher capacity T-series servers have made it more attractive to use them for applications with high resource requirements. 
New deployment models permit native I/O performance for demanding applications by running them in I/O domains with direct access to their devices. This is leveraged in SPARC SuperCluster, and can be leveraged in T-series servers to provision high-performance applications running in domains. Carefully planned, this can be used to provide higher performance for critical applications.

    Read the article

  • ASP.NET MVC project won't start under IIS 5.1 on Windows XP SP3

    - by mrjoltcola
    I have an ASP.NET MVC 2 project that runs fine under Windows 7 and will start on Windows XP if I use the Visual Studio Development Server; however, starting it under IIS generates the error "Unable to start debugging on the web server" with the message "The specified procedure could not be found". There are no errors in the system event viewer. If I start without debugging I get an "HTTP 500 Internal Server Error". The reason I run it under IIS is that the project also includes some WCF wsHttp web services that use certificates, so the VS Development Server is not adequate for hosting those. I have already seen the links on SO that talk about adding the wildcard mapping. I've already done that, just as I did on Windows Server 2003, where I have successfully hosted ASP.NET MVC RC2 for quite a while.

    Read the article

  • Silence output from SimpleXMLRPCServer

    - by Corey Goldberg
    I am running an XML-RPC server using SimpleXMLRPCServer from the stdlib. My code looks something like this:

        import SimpleXMLRPCServer
        import socket

        class RemoteStarter:
            def start(self):
                return 'foo'

        rs = RemoteStarter()
        host = socket.gethostbyaddr(socket.gethostname())[0]
        port = 9000
        server = SimpleXMLRPCServer.SimpleXMLRPCServer((host, port))
        server.register_instance(rs)
        server.serve_forever()

    Every time the 'start' method gets called remotely, the server prints an access line like this:

        <server_name> - - [10/Mar/2010 13:06:20] "POST /RPC2 HTTP/1.0" 200 -

    I can't figure out a way to silence the output so it doesn't print these access lines to stdout. Anyone?
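    A minimal sketch of one way to do this, using the logRequests flag that the stdlib server's constructor accepts:

        # Passing logRequests=False stops the per-request access-log lines
        server = SimpleXMLRPCServer.SimpleXMLRPCServer((host, port), logRequests=False)

    Another option along the same lines is to subclass SimpleXMLRPCRequestHandler and override its log_request method with a no-op.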

    Read the article

  • Part 1 - Load Testing In The Cloud

    - by Tarun Arora
    Azure is fascinating, but even more fascinating is the marriage of Azure and TFS! Introduction Recently a client I worked for had 2 major business critical applications being delivered, with very little time budgeted for Performance testing, we immediately hit a bottleneck when the performance testing phase started, the in house infrastructure team could not support the hardware requirements in the short notice. It was suggested that the performance testing be performed on one of the QA environments which was a fraction of the production environment. This didn’t seem right, the team decided to turn to the cloud. The team took advantage of the elasticity offered by Azure, starting with a single test agent which was provisioned and ready for use with in 30 minutes the team scaled up to 17 test agents to perform a very comprehensive performance testing cycle. Issues were identified and resolved but the highlight was that the cost of running the ‘test rig’ proved to be less than if hosted on premise by the infrastructure team. Thank you for taking the time out to read this blog post, in the series of posts, I’ll try and cover the start to end of everything you need to know to use Azure to build your Test Rig in the cloud. But Why Azure? I have my own Data Centre… If the environment is provisioned in your own datacentre, - No matter what level of service agreement you may have with your infrastructure team there will be down time when the environment is patched - How fast can you scale up or down the environments (keeping the enterprise processes in mind) Administration, Cost, Flexibility and Scalability are the areas you would want to think around when taking the decision between your own Data Centre and Azure! How is Microsoft's Public Cloud Offering different from Amazon’s Public Cloud Offering? Microsoft's offering of the Cloud is a hybrid of Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) which distinguishes Microsoft's offering from other providers such as Amazon (Amazon only offers IaaS). PaaS – Platform as a Service IaaS – Infrastructure as a Service Fills the needs of those who want to build and run custom applications as services. Similar to traditional hosting, where a business will use the hosted environment as a logical extension of the on-premises datacentre. A service provider offers a pre-configured, virtualized application server environment to which applications can be deployed by the development staff. Since the service providers manage the hardware (patching, upgrades and so forth), as well as application server uptime, the involvement of IT pros is minimized. On-demand scalability combined with hardware and application server management relieves developers from infrastructure concerns and allows them to focus on building applications. The servers (physical and virtual) are rented on an as-needed basis, and the IT professionals who manage the infrastructure have full control of the software configuration. This kind of flexibility increases the complexity of the IT environment, as customer IT professionals need to maintain the servers as though they are on-premises. The maintenance activities may include patching and upgrades of the OS and the application server, load balancing, failover clustering of database servers, backup and restoration, and any other activities that mitigate the risks of hardware and software failures.   
The biggest advantage with PaaS is that you do not have to worry about maintaining the environment, you can focus all your time in solving the business problems with your solution rather than worrying about maintaining the environment. If you decide to use a VM Role on Azure, you are asking for IaaS, more on this later. A nice blog post here on the difference between Saas, PaaS and IaaS. Now that we are convinced why we should be turning to the cloud and why in specific Azure, let’s discuss about the Test Rig. The Load Test Rig – Topology Now the moment of truth, Of course a big part of getting value from cloud computing is identifying the most adequate workloads to take to the cloud, so I’ve decided to try to make a Load Testing rig where the Agents are running on Windows Azure.   I’ll talk you through the above Topology, - User: User kick starts the load test run from the developer workstation on premise. This passes the request to the Test Controller. - Test Controller: The Test Controller is on premise connected to the same domain as the developer workstation. As soon as the Test Controller receives the request it makes use of the Windows Azure Connect service to orchestrate the test responsibilities to all the Test Agents. The Windows Azure Connect endpoint software must be active on all Azure instances and on the Controller machine as well. This allows IP connectivity between them and, given that the firewall is properly configured, allows the Controller to send work loads to the agents. In parallel, the Controller will collect the performance data from the agents, using the traditional WMI mechanisms. - Test Agents: The Test Agents are on the Windows Azure Public Cloud, as soon as the test controller issues instructions to the test agents, the test agents start executing the load tests. The HTTP requests are issued against the web server on premise, the results are captured by the test agents. And finally the results are passed over to the controller. - Servers: The Web Server and DB Server are hosted on premise in the datacentre, this is usually the case with business critical applications, you probably want to manage them your self. Recap and What’s next? So, in the introduction in the series of blog posts on Load Testing in the cloud I highlighted why creating a test rig in the cloud is a good idea, what advantages does Windows Azure offer and the Test Rig topology that I will be using. I would also like to mention that i stumbled upon this [Video] on Azure in a nutshell, great watch if you are new to Windows Azure. In the next post I intend to start setting up the Load Test Environment and discuss pricing with respect to test agent machine types that will be used in the test rig. Hope you enjoyed this post, If you have any recommendations on things that I should consider or any questions or feedback, feel free to add to this blog post. Remember to subscribe to http://feeds.feedburner.com/TarunArora.  See you in Part II.   Share this post : CodeProject

    Read the article

  • Cross Domain Post - Losing POST Data

    - by Tomas Beblar
    I have 2 servers, both running R2 / IIS 7 / ASP Classic sites (can't get around any of that). Server A is making the following calls:

        Dim objXMLHTTP, xml
        Set xml = Server.CreateObject("Msxml2.ServerXmlHTTP.6.0")
        xml.Open "POST", templateName, false
        xml.setRequestHeader "Content-Type", "application/xml"
        xml.Send variables

    Here templateName is the URL of Server B (it's an email template) ... and variables is a name/value pair string like a query string: password=myPassword&customerEmail=Dear+Bob,.... Server B receives the POST, but all the POST data (password=myPassword&customerEmail=Dear+Bob,....) is missing from it:

        password = Request.Form("templatePassword")
        customerEmail = Request.Form("RackAttackCustomerEmail")

    The above values are all empty. Here's the kicker: this all worked on our old servers (Windows Server 2003, IIS 6), but when we migrated over, it stopped working correctly. My question is: what would cause the POST data to be dropped in IIS 7 when it all worked in IIS 6? I've done about 3 days of research into this, trying many different things, and nothing has worked. The POST data is just gone.
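    One thing worth ruling out (a suggestion, not a verified diagnosis of this particular migration): classic ASP only populates the Request.Form collection when the request body is sent as application/x-www-form-urlencoded, and the payload here is a urlencoded name/value string while the header declares application/xml. A sketch of the adjusted call on Server A:

        xml.Open "POST", templateName, false
        xml.setRequestHeader "Content-Type", "application/x-www-form-urlencoded"
        xml.Send variables   ' variables is already name=value&name=value... encoded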

    Read the article

  • System.Web.AspNetHostingPermission Exception on New Deployment

    - by Jason N. Gaylord
    I have a friend who is moving a web application from one server over to another. The new server has the same settings as the first server; however, he's running into a security issue. Here are the error details:

        Request for the permission of type 'System.Web.AspNetHostingPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.

    The Event Viewer does not point to anything specific in the web.config file or anywhere else. The web application is on the C: drive. This is a Windows Server 2008 R2 x64 server with a brand-new IIS 7 installation. IIS is set to classic mode for this app pool.
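    Two things commonly checked for this particular exception (suggestions to investigate, not a confirmed fix): whether the copied assemblies are still flagged as having come from another machine (the "blocked" zone marker that partial trust reacts to), and whether the site is running under a partial-trust level. The trust level can be confirmed or raised in web.config, for example:

        <system.web>
            <trust level="Full" />
        </system.web>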

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
      This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

      Morning Coffee. When I was a DBA, the first thing I did when I sat down at my desk at work was to check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place like Microsoft System Center Operations Manager (SCOM) or similar 3rd party products that would track all these things for you. But at that moment, we had no recourse but to write our own Powershell scripts to do it. Now it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

      "But, we have a cluster...we don't need backups." Sadly I've heard this line more than I would have liked to. You need to understand that a cluster is built on shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also during an outage of any SQL-related service or dependent device. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise.

      Backup, fine. How often do I take a backup? The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called Recovery Time Objective, or RTO. Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when an emergency strikes.

      A backup is nothing more than an untested restore. Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - which, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server though, be sure to run DBCC CHECKDB WITH PHYSICALONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable traceflags 2562 and/or 2549, which will speed up the PHYSICALONLY checks further - you can read more about this enhancement here. Back to the "How Often" question for a second. If you have the disk, the network latency, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner rather than later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.

      Where to back up to? Network share? Locally? SAN volume? This is another topic where everybody has a favorite choice. So, I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that, it's also slow!). The key is to have a copy of those backup files made quickly and, if at all possible, to a remote target in a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
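      The moving parts above boil down to a handful of T-SQL commands. The sketch below is only illustrative: the database name, backup path and 24-hour threshold are placeholders, and the traceflags only help as described on SQL Server 2008 R2 SP1 CU4 and later.

          -- Frequent transaction log backup (run from a scheduled Agent job; name and path are placeholders)
          BACKUP LOG [SalesDB]
          TO DISK = N'X:\SQLBackups\SalesDB_log.trn'
          WITH CHECKSUM;

          -- Lightweight check on the production box when full checks are offloaded elsewhere
          DBCC TRACEON (2562, 2549, -1);                       -- speeds up PHYSICAL_ONLY checks
          DBCC CHECKDB ([SalesDB]) WITH PHYSICAL_ONLY, NO_INFOMSGS;

          -- Full consistency check on the server where the backup was restored
          DBCC CHECKDB ([SalesDB]) WITH NO_INFOMSGS, ALL_ERRORMSGS;

          -- "Morning coffee" query: databases with no full backup in the last 24 hours
          SELECT d.name, MAX(b.backup_finish_date) AS last_full_backup
          FROM   sys.databases AS d
          LEFT JOIN msdb.dbo.backupset AS b
                 ON b.database_name = d.name AND b.type = 'D'
          WHERE  d.name <> 'tempdb'
          GROUP BY d.name
          HAVING MAX(b.backup_finish_date) IS NULL
              OR MAX(b.backup_finish_date) < DATEADD(HOUR, -24, GETDATE());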

    Read the article

  • What exactly happens when I call a web service method using PHP::SOAP?

    - by Bedwyr Humphreys
    Say I have a simple client/server scenario with one method:

        // client code
        $client = new SoapClient("service.wsdl");
        $result = $client->getPi();
        ...

        // server code
        function getPi() {
            return 3.141;
        }
        $server = new SoapServer("service.wsdl");
        $server->addFunction("getPi");
        $server->handle();

    Am I right in thinking that when the client makes a call to the getPi() method, addFunction() gets called every time? Is this really how PHP SOAP web services work? Or is there some caching going on? Thanks.

    Read the article

  • Java Socket fails to transmit data over the network

    - by Mark Griffin
    I'm experiencing a bizarre problem with sockets between a Java Knopflerfish client bundle and a PHP (CLI, not web) server. The client/server pair works fine when both are located on localhost, and all data is transmitted successfully. However, when the Java client runs on a different machine, connections to the server succeed, but no data is received by the PHP script. Packet analysis confirms that the data sent by the Java client arrives at the server machine - PHP just seems to have problems getting its hands on it. As a further note, I've done some tests with telnet as the client. The PHP server script receives all data fine from any host. This leads me to believe that the problem has something to do with the way Java is setting up the socket, or that there is some networking issue I'm not familiar with. Any thoughts would be appreciated. I can post code samples if desired.
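    Since telnet works and the packets do arrive, one thing worth ruling out is message framing: telnet terminates every line with CRLF, while a raw Java write may not, so a line-oriented PHP reader (fgets(), or socket_read() with PHP_NORMAL_READ) can sit waiting for a terminator that never comes. Below is a minimal sketch of a Java client that flushes and terminates its messages the same way telnet does; the host, port and payload are placeholders.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.io.OutputStreamWriter;
        import java.io.Writer;
        import java.net.Socket;

        public class PhpSocketClient {
            public static void main(String[] args) throws Exception {
                // Placeholder host/port; adjust to match the PHP CLI server.
                try (Socket socket = new Socket("192.168.1.50", 9000)) {
                    Writer out = new OutputStreamWriter(socket.getOutputStream(), "UTF-8");
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(socket.getInputStream(), "UTF-8"));

                    // Terminate the message with CRLF and flush explicitly, so the
                    // bytes actually leave the client and a line-based reader on
                    // the PHP side sees a complete line.
                    out.write("HELLO from Java\r\n");
                    out.flush();

                    System.out.println("server replied: " + in.readLine());
                }
            }
        }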

    Read the article

  • How to create a persistent connection using AS3 Sockets for a chat client

    - by Vivek
    I want to create a chat client in Flash/Flex for a chat server, something like a MUD/MOO client, but I'm unable to create a persistent connection. I've been using the AS3 Socket class, but I'm getting disconnected from the server side soon after the connection is made, while the client still shows the 'connected' property as true. The server is asynchronous and was written in Python using asyncore/asynchat; it works fine with most open source MOO/MUD clients. I tried connecting my program to a simple synchronous echo server, and there both reads and writes worked fine with no disconnections from either side. So my question is: how do I make a persistent connection with the server?
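    Two things commonly trip up AS3 socket clients here and are worth ruling out: data written to a Socket isn't transmitted until flush() is called, and Flash Player makes its own socket policy-file request, which on the server side can look like a connection that opens and then drops shortly afterwards. The following is only an illustrative sketch - the host, ports and login line are placeholders, and the xmlsocket:// policy port assumes the conventional 843 setup.

        import flash.events.Event;
        import flash.events.IOErrorEvent;
        import flash.events.ProgressEvent;
        import flash.events.SecurityErrorEvent;
        import flash.net.Socket;
        import flash.system.Security;

        private var socket:Socket;

        private function connectToChat():void {
            // Ask the placeholder policy server for the socket policy file first.
            Security.loadPolicyFile("xmlsocket://chat.example.com:843");

            socket = new Socket();
            socket.addEventListener(Event.CONNECT, onConnect);
            socket.addEventListener(Event.CLOSE, onClose);
            socket.addEventListener(ProgressEvent.SOCKET_DATA, onData);
            socket.addEventListener(IOErrorEvent.IO_ERROR, onIOError);
            socket.addEventListener(SecurityErrorEvent.SECURITY_ERROR, onSecurityError);
            socket.connect("chat.example.com", 4000);   // placeholder host/port
        }

        private function onConnect(e:Event):void {
            socket.writeUTFBytes("connect guest\n");    // placeholder MOO login line
            socket.flush();                             // nothing is sent until flush()
        }

        private function onData(e:ProgressEvent):void {
            trace("server: " + socket.readUTFBytes(socket.bytesAvailable));
        }

        private function onClose(e:Event):void { trace("server closed the connection"); }
        private function onIOError(e:IOErrorEvent):void { trace("io error: " + e.text); }
        private function onSecurityError(e:SecurityErrorEvent):void { trace("security error: " + e.text); }

    The listeners for CLOSE, IO_ERROR and SECURITY_ERROR should at least reveal why the link is dropping instead of leaving 'connected' silently stale.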

    Read the article

  • php cgi htaccess

    - by msaif
    I am trying to execute CGI but it fails. I include the following lines in .htaccess:

        AddHandler cgi-script .cgi
        Options +ExecCGI

    abc.com/ is equivalent to the /home directory, abc.com/compare is equivalent to the /home/compare directory, and abc.com/compare/contact is equivalent to the /home/compare/contact directory. The .htaccess file is located in the contact directory, but the server returns the following message:

        Internal Server Error
        The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, [email protected], and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log.

    What is the problem? One more thing: can I check with phpinfo() whether CGI is enabled or not?
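    A 500 error right after adding Options +ExecCGI to an .htaccess often means the directives themselves are being rejected rather than the script failing. Below is a hedged sketch of what typically has to line up - the directory path is taken from the question, everything else is a placeholder, and on shared hosting the AllowOverride part can only be changed by the host.

        # Main server/vhost config (not .htaccess): an .htaccess may not use
        # Options or AddHandler unless AllowOverride permits them.
        <Directory "/home/compare/contact">
            AllowOverride Options FileInfo
        </Directory>

        # .htaccess in /home/compare/contact, as in the question:
        AddHandler cgi-script .cgi
        Options +ExecCGI

        # The script itself also needs execute permission and a shebang line:
        #   chmod 755 script.cgi
        #   first line of script.cgi: #!/usr/bin/perl   (or whichever interpreter applies)

    The server error log mentioned in the message will usually say which of these is the actual culprit.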

    Read the article

  • Need to copy remotely hosted file via Shell Command

    - by pnm123
    Hello, there is a file hosted remotely on a server that does not support shell access. I bought a new server that does support shell access, so now I want to copy a file from the old server to the new server via a shell command, using PuTTY. The file URL is like this: http://www.domain.com/file.gzip, and it is username/password protected. To be more specific, I want to copy a backup of a home directory from cPanel to my new server via a shell command. I did this a few months ago but I don't remember how, and I have not been able to find it by googling. Thank you, Prasad
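    Since the old host only exposes the file over HTTP, the usual approach is to pull it from the new server's shell with an HTTP client that supports authentication. A quick sketch - the URL is the one from the question, and the credentials are placeholders for the real cPanel login:

        # from the new server's shell (PuTTY session):
        wget --user='cpaneluser' --password='secret' http://www.domain.com/file.gzip

        # or, with curl:
        curl -u cpaneluser:secret -o file.gzip http://www.domain.com/file.gzip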

    Read the article

< Previous Page | 1631 1632 1633 1634 1635 1636 1637 1638 1639 1640 1641 1642  | Next Page >