Search Results

Search found 18773 results on 751 pages for 'router configuration'.


  • How to share an internet connection on Mac OS X with VirtualBox VMs using Host-only

    - by redben
    In one line: is the following possible: Airport <- OS X bridge <- vbox host-only <- VMs? On Mac OS X, I have VirtualBox with a virtual machine. For now I have configured two interfaces for my virtual machine: eth0 is a normal bridge so my VM can access the internet (when Airport is connected), and eth1 is set to host-only so I can access my VM from the host when there is no wifi / Airport is down. So basically it's Adapter 1 when there is wifi, Adapter 2 when there is not. I'd like to have only one configuration to make things simpler. I thought I could just keep the host-only configuration and, on the host (OS X), go to Internet Sharing and select "share from Airport" to vboxnet0 (the VirtualBox virtual interface), only to find out that vboxnet0 doesn't show up in the interfaces list in the OS X preferences. I know that on a Linux host you could install something called bridge-utils and use it to bridge the two interfaces. Is there anything like that for Mac?
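    One possible workaround, since vboxnet0 is hidden from the Internet Sharing UI, is to do by hand what Internet Sharing does: turn on packet forwarding and NAT traffic from the host-only subnet out through the Airport link. A minimal sketch using the ipfw/natd tools shipped with OS X releases of that era; the interface name en1 and the default 192.168.56.0/24 host-only subnet are assumptions, so check yours with ifconfig:

        # enable packet forwarding on the OS X host
        sudo sysctl -w net.inet.ip.forwarding=1
        # NAT anything leaving through the Airport interface (assumed en1)
        sudo natd -interface en1
        sudo ipfw add divert natd ip from any to any via en1

    Inside the VM, keep a static address on the host-only subnet and point its default gateway and DNS at the host's vboxnet0 address (192.168.56.1 by default).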


  • monitoring nfs with monit

    - by Josh Nankin
    I'd like to monitor NFS mounts and the NFS server process using Monit. On the server I'd need a PID file, but I can't seem to find a way to get one created with the existing configuration files. Is there a way to do this, or has anyone monitored the server in a different way (checking whether the NFS port, 2049, is accepting connections, etc.)? On clients, I was thinking of making Monit simply look for a specific file in an NFS mount: if it's accessible, all is well. The problem is that if the NFS server does go down, file requests usually hang (perhaps even indefinitely, I'm not sure). How would one get around this issue with Monit? Any configuration examples would be greatly appreciated!
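    A hedged sketch of one Monit setup along these lines (NFS listens on TCP 2049; the paths and names below are made-up placeholders, and check program needs a reasonably recent Monit): watch the port on the server instead of a PID file, and on clients probe the mount through a helper that cannot hang forever:

        # server side: no PID file needed, just probe the NFS TCP port
        check host nfs-server with address 127.0.0.1
            if failed port 2049 type tcp then alert

        # client side: run a probe script instead of stat'ing the mount directly
        check program nfs-mount with path "/usr/local/bin/check_nfs_mount.sh"
            if status != 0 then alert

    where the probe script bounds the potentially hanging stat with coreutils timeout:

        #!/bin/sh
        # exits non-zero unless the file can be stat'ed within 10 seconds
        timeout 10 stat /mnt/nfs/.monit-canary >/dev/null 2>&1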


  • What does Embedded SATA Controller : ATA mean?

    - by paulH
    I have a PowerEdge R510 server with a PERC H700 integrated RAID controller that is exhibiting slower-than-expected disk speeds (RAID 1 and RAID 10 arrays), so I'm looking at the configuration of the server. Running omreport chassis biossetup on the server shows me the following setting:

        Embedded SATA Controller : ATA

    I can also see that the possible values for this setting are: off | ata | qdma | raid. I've been looking online to find out what this setting means and what the various options refer to, but I've been unable to find anything particularly helpful, so I was hoping somebody here could enlighten me. Thanks, Paul.


  • Linux route add between static LAN and Wifi Gateway

    - by Hamza
    I have two local machines connected to each other via wired Ethernet, and one of those machines is also connected to a wifi network which provides internet access. A pseudo-graphical representation of the topology:

        (PC2)----192.168.2.x----(PC1)----10.0.0.x----(Wifi Gateway)

    The configuration on PC2 is:

        iface eth0 inet static
            address 192.168.2.2
            network 192.168.2.0
            netmask 255.255.255.0
            gateway 192.168.2.1

    ...and the configuration on PC1 is:

        iface eth0 inet static
            address 192.168.2.1
            network 192.168.2.0
            netmask 255.255.255.0
            gateway 192.168.2.1

    On PC1, I've added a default route for wlan0, as I couldn't access the internet otherwise:

        route add default gw 10.0.0.1 wlan0

    And I also tried setting the gateway for the 192.168.2.x network using:

        route add -net 192.168.2.0 netmask 255.255.255.0 gw 10.0.0.1

    But I still can't access the internet from PC2. Edit: I don't have access to the wifi gateway.
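    Since the wifi gateway can't be given a return route to 192.168.2.0/24, the usual fix is to make PC1 NAT for PC2 rather than just route. A minimal sketch, assuming PC1's wifi interface is wlan0:

        # on PC1: forward packets and masquerade the wired LAN behind the wifi address
        sudo sysctl -w net.ipv4.ip_forward=1
        sudo iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o wlan0 -j MASQUERADE

    PC2 keeps gateway 192.168.2.1 but also needs a reachable DNS server in /etc/resolv.conf; the gateway line in PC1's own eth0 stanza (which points at itself) can simply be dropped.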


  • Update: GTAS and EBS

    - by jeffrey.waterman
    Provided below are updated target timeframes for patches supporting upcoming legislative enhancements. Dates have been pushed out from those previously provided due to changes in Treasury mandatory dates; the mandatory dates for GTAS and IPAC have changed since the previous target dates were published. These are target dates, not commitments to deliver functionality.

    R12 target timeframes for customer patches:

        - GTAS Configuration: Apr 2012. Patch is available.
        - GTAS Key Processes: Oct/Nov 2012. Includes the GTAS processes necessary to create the GTAS interface file, migration of FACTS balances to GTAS, the GTAS Trial Balance, and the GTAS Transaction Register.
        - GTAS Reports: Nov/Dec 2012. GTAS Trial Balance, GTAS Transaction Register.
        - Capture of Trading Partner TAS/BETC: Apr/May 2013. Includes modifications necessary to capture BETC and Trading Partner TAS/BETC on relevant transactions.
        - GTAS Other Processes: May/Jun 2013. Includes GTAS Customer and Vendor update processes.
        - IPAC: Aug/Sep. Includes modifications required to IPAC to accommodate componentized TAS and BETC.

    11i target timeframes for customer patches:

        - GTAS Configuration: May 2012. Patch is available.
        - GTAS Key Processes: Nov/Dec 2012. Includes the GTAS processes necessary to create the GTAS interface file, migration of FACTS balances to GTAS, the GTAS Trial Balance, and the GTAS Transaction Register.
        - GTAS Reports: Dec/Jan 2012. GTAS Trial Balance, GTAS Transaction Register.
        - Capture of Trading Partner TAS/BETC: May/Jun 2013. Includes modifications necessary to capture BETC and Trading Partner TAS/BETC on relevant transactions.
        - GTAS Other Processes: Jun/Jul 2013. Includes GTAS Customer and Vendor update processes.
        - IPAC: Sep/Oct 2013. Includes modifications required to IPAC to accommodate componentized TAS and BETC.


  • two possible wifi devices competing, one is hard blocked

    - by patrickmw
    I blacklisted acer_wmi because it was showing up in the rfkill list; after that, ideapad_wlan was listed.

        $ rfkill list wifi
        1: ideapad_wlan: Wireless LAN
                Soft blocked: no
                Hard blocked: no
        3: brcmwl-0: Wireless LAN
                Soft blocked: no
                Hard blocked: yes

        $ lshw -C network
          *-network
               description: Ethernet interface
               product: AR8131 Gigabit Ethernet
               vendor: Atheros Communications
               physical id: 0
               bus info: pci@0000:03:00.0
               logical name: eth0
               version: c0
               serial: f0:de:f1:12:21:e9
               size: 1Gbit/s
               capacity: 1Gbit/s
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress vpd bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
               configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.0-NAPI duplex=full firmware=N/A ip=192.168.1.139 latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
               resources: irq:42 memory:f0400000-f043ffff ioport:2000(size=128)
          *-network
               description: Wireless interface
               product: BCM4313 802.11b/g/n Wireless LAN Controller
               vendor: Broadcom Corporation
               physical id: 0
               bus info: pci@0000:04:00.0
               logical name: eth1
               version: 01
               serial: ac:81:12:38:ba:89
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
               configuration: broadcast=yes driver=wl0 driverversion=5.100.82.38 latency=0 multicast=yes wireless=IEEE 802.11
               resources: irq:17 memory:f0500000-f0503fff

    I'm not sure how to disable the wifi devices independently. I'm also not sure which device is the correct one; I think it's the brcmwl one. Any suggestions?
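    A few commands that may help untangle this. Note that the lshw output shows only one wireless card (the BCM4313, driven by wl), so ideapad_wlan may well be the laptop's platform kill switch rather than a second device; the index numbers below come from the rfkill output above, and the module name is an assumption:

        # see which kernel object registered each rfkill entry
        cat /sys/class/rfkill/rfkill3/name
        ls -l /sys/class/rfkill/rfkill3/device
        # soft blocks can be lifted directly; a hard block only clears via the
        # laptop's physical wireless switch or Fn key
        sudo rfkill unblock 1
        # to retire a driver entirely, blacklist its module, as with acer_wmi
        echo "blacklist wl" | sudo tee /etc/modprobe.d/blacklist-wl.conf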


  • USB to USB CD ROM emulator

    - by JohnnyLambada
    I'm wondering if anyone knows of a CDROM emulator that runs on Linux. I want to emulate this configuration: [CDROM DRIVE]----USB CABLE----[COMPUTER UNDER TEST] Where [COMPUTER UNDER TEST] is a computer that boots from a physical CD inserted into the [CDROM DRIVE]. Only instead of the [CDROM DRIVE] I want the following configuration: [CD IMAGE BUILD MACHINE]-----USB CABLE-----[COMPUTER UNDER TEST]. I want to build an ISO image on the [CD IMAGE BUILD MACHINE] and have some sort of USB CDROM emulator running on it to serve up the ISO image to the [COMPUTER UNDER TEST] as though it was talking to the [CDROM DRIVE]. Does this exist? If it does, I can't find it. I want to do this so I can test out bootable CDs without burning a lot of coasters.
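    The closest thing in the mainline Linux kernel is the USB gadget stack: a machine whose USB controller can run in device/OTG mode can present an ISO as a read-only CD-ROM with the g_mass_storage module. The main catch is that ordinary PC USB ports are host-only, so the build machine would need device-capable hardware (many dev boards qualify). A hedged sketch, the ISO path being a placeholder:

        # on the image-build machine, with a device/OTG-capable USB port
        sudo modprobe g_mass_storage file=/srv/images/test.iso cdrom=1 ro=1

    The computer under test then sees an ordinary USB CD-ROM drive and can boot from it; swapping in a rebuilt image only requires unloading and reloading the module.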


  • Oracle Virtualization Friday Spotlight - October 18, 2013

    - by Monica Kumar
    Opening the Oracle VM Templates black box. Oracle VM Templates give you the efficiency of speed and the assurance of no guesswork. For those in the know, Oracle VM Guest Additions is a great way to empower you to do more interesting things with the Templates. Today's article shares the secrets with those who are not content to treat Oracle VM Templates as a black box.

    Oracle VM Guest Additions is a set of packages that can be installed on the guest operating system of a virtual machine running in the Oracle VM environment. These packages provide the tools to allow bi-directional communication directly between Oracle VM Manager and the operating system running within the virtual machine. OK, here's where the 'power-user' part comes in: this gives you fine-grained control over the configuration and behavior of components running within the virtual machine, directly from Oracle VM Manager. You now have the ability to see and direct what goes on inside your VM from Oracle VM Manager:

        - Get reports on IP addressing
        - Use the template configuration facility to automatically configure virtual machines as they are first started
        - Send messages directly to a virtual machine to trigger programmed events
        - Query a virtual machine to obtain information pertaining to previous messages

    Enough of the theory! To get hands-on how-tos and talk directly with the product expert on Oracle VM Guest Additions, Robbie de Meyer, or with Oracle VM Templates for Oracle Database and RAC expert Saar Maoz, join us for the Oct 24th live webcast. You can also read more about Oracle VM Guest Additions in the whitepaper.


  • Apache2 + Php + Pthreads HowTos

    - by Drug
    04 LTS, 64 bit. What I would really love to do is sudo apt-get install libapache2-mod-php5, but with PHP compiled with --enable-maintainer-zts so I could later install pthreads with pecl install pthreads. Sadly, I understand that is not possible. I know the easiest way is to recompile PHP with Apache support and ZTS. However, I really like the way the standard Ubuntu PHP package is configured, and I am used to the paths for the CLI php.ini, the Apache php.ini, and the other module and file paths that this Ubuntu package defines. So I just want to change the package source a little bit and install it.

        # Get the stuff necessary to build the package
        sudo apt-get build-dep php5-common
        # Get the package source
        sudo apt-get source php5-common

    At this point I am getting the source not just for the php5-common package but for the whole php5 package. If I were to sudo make && make install at this point, would that mean I am installing a lot of unnecessary stuff?

        # Add configuration options
        ./configure --enable-maintainer-zts

    Does this mean that I am appending a configuration option, or am I generating a whole new config? Alternative at this point: is there a way of getting the configure options that this package defines, so that I can grab a PHP source tarball from php.net and compile it with

        ./configure --prefix=package_prefix \    # Option 1 from package
                    --enable-embed \             # Option 2 from package
                    --with-regex=php \           # Option 3 from package

    Continuing the main idea... Solution 1:

        # Compile (not compiling)
        sudo make && make install

    Will I be building PHP with EVERYTHING at this point? And if I compile like this, will I be unable to remove the mess I made using sudo apt-get purge php5? Solution 2:

        # Recompile the package
        dpkg-buildpackage -rfakeroot -uc -b

    This does not compile either. Please correct my steps so I can install everything correctly.
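    For what it's worth: apt-get source php5-common fetches the whole tree because php5-common is just one binary package built from the single php5 source package, and running ./configure by hand inside it bypasses the packaging entirely. A hedged sketch of the package-friendly route (the exact spot in debian/rules where the configure flags are collected varies between releases):

        sudo apt-get build-dep php5
        apt-get source php5
        cd php5-*/
        # inspect the configure flags Ubuntu uses
        grep -n 'CONFIG' debian/rules | head
        # (alternatively, with php5-dev installed: php-config --configure-options)
        # add --enable-maintainer-zts to those flags, then rebuild .debs
        # instead of running make install, so apt can cleanly remove them later
        dpkg-buildpackage -rfakeroot -uc -b
        sudo dpkg -i ../php5-common_*.deb ../libapache2-mod-php5_*.deb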


  • Best Persistence choice for J2EE-App with frequently changing Data Model

    - by Ben-G
    Whenever I develop a J2EE application, at some point I decide to switch from my dummy persistence (simply using lists and other data structures) to some sort of database persistence, mostly when I hope the data model is more or less complete. From that point on, changes to the data model become exhausting, but unluckily they occur rather often. I've used different object-relational mappers (iBATIS, Hibernate) for my projects. They definitely reduce the pain of data model changes, but they still make me adjust code/configuration in three or four places for every single change. To me, that's cumbersome and error prone. I had a better experience with db4o, which simply persists Java objects as they are, but I believe its performance does not scale for huge applications. Is there any way to maintain performance while leaving out all the ugly configuration work? I'm seeking a performant framework which really hides persistence from my code. Wishful thinking? Or am I missing out on THE technology? Hope you can help.


  • How to know the maximum capacity of memory/RAM of a server? [closed]

    - by Nam G. VU
    I have a Dell PowerEdge T410 and have installed 16 GB of RAM in it. I don't know whether I can extend it to 32 GB, 64 GB, ... of RAM on this server; from the product description, it seems I have no way to tell. So my question is: how do I find out the exact maximum memory/RAM capacity of a server? I need this information so I can verify the number myself. P.S. My server is not exactly the same as the description on the Dell website; I modified the configuration to meet my budget.
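    On Linux, the firmware's own answer can be read from the DMI tables; a quick sketch (the output lines shown are illustrative, not taken from a real T410):

        # DMI type 16 = physical memory array: capacity ceiling and slot count
        sudo dmidecode -t 16
        #   Maximum Capacity: 128 GB
        #   Number Of Devices: 8
        # DMI type 17 = per-slot details: which slots are populated, and sizes
        sudo dmidecode -t 17

    It is worth cross-checking the reported maximum against the vendor's memory configuration guide, since firmware updates sometimes raise the supported DIMM sizes.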


  • two possible wifi devices competing, one is hard blocked - unable to connect wireless

    - by patrickmw
    I blacklisted acer_wmi because it was showing up in the rfkill list; after that, ideapad_wlan was listed.

        $ rfkill list wifi
        1: ideapad_wlan: Wireless LAN
                Soft blocked: no
                Hard blocked: no
        3: brcmwl-0: Wireless LAN
                Soft blocked: no
                Hard blocked: yes

        $ lshw -C network
          *-network
               description: Ethernet interface
               product: AR8131 Gigabit Ethernet
               vendor: Atheros Communications
               physical id: 0
               bus info: pci@0000:03:00.0
               logical name: eth0
               version: c0
               serial: f0:de:f1:12:21:e9
               size: 1Gbit/s
               capacity: 1Gbit/s
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress vpd bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
               configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.0-NAPI duplex=full firmware=N/A ip=192.168.1.139 latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
               resources: irq:42 memory:f0400000-f043ffff ioport:2000(size=128)
          *-network
               description: Wireless interface
               product: BCM4313 802.11b/g/n Wireless LAN Controller
               vendor: Broadcom Corporation
               physical id: 0
               bus info: pci@0000:04:00.0
               logical name: eth1
               version: 01
               serial: ac:81:12:38:ba:89
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
               configuration: broadcast=yes driver=wl0 driverversion=5.100.82.38 latency=0 multicast=yes wireless=IEEE 802.11
               resources: irq:17 memory:f0500000-f0503fff

    Contents of /var/lib/NetworkManager/NetworkManager.state:

        [main]
        NetworkingEnabled=true
        WirelessEnabled=true
        WWANEnabled=true

    I'm not sure how to disable the wifi devices independently. I'm also not sure which device is the correct one; I think it's the brcmwl one. Any suggestions?


  • Clonezilla-SE with another DHCP Server in LAN

    - by aleroot
    I want to install a Clonezilla server (192.168.1.100) on a network that already has a DHCP server (dd-wrt with dnsmasq, 192.168.1.1). I've installed Clonezilla SE on Ubuntu Server 10.10. Once Clonezilla Server was installed and configured, I removed its DHCP server and set the PXE server address in the dnsmasq configuration on the DHCP server:

        dhcp-boot=pxelinux.0,,192.168.1.100

    When I try to PXE-boot a computer on the network, Clonezilla starts but gives me an error that the IP address of the machine was not given by the Clonezilla server, and it can't continue... Has anyone already configured Clonezilla SE in a similar environment? Is there some configuration on the DRBL server side of Clonezilla that I need to do?
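    One approach that reportedly works with dnsmasq is proxy-DHCP: let dd-wrt keep handing out the leases while dnsmasq answers only the PXE part and hands clients to the DRBL box. A hedged sketch for the dnsmasq config; whether DRBL then accepts clients whose lease came from elsewhere may still require relaxing its own check in the DRBL configuration:

        # dd-wrt/dnsmasq: PXE proxy mode, address leases untouched
        dhcp-range=192.168.1.0,proxy
        dhcp-boot=pxelinux.0,,192.168.1.100
        pxe-service=x86PC,"Boot Clonezilla/DRBL",pxelinux,192.168.1.100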


  • WP E Commerce Safe Mode restriction error [on hold]

    - by Mustafa Kamal
    I have an online shop, created with WP e-Commerce, that broke after I moved it to another server. I can be sure the problem comes from WP e-Commerce because when I disable that plugin, everything runs as expected. This is the exact error message:

        Warning: session_start() [function.session-start]: SAFE MODE Restriction in effect. The script whose uid is 515 is not allowed to access /tmp owned by uid 0 in /home/mikalu/public_html/wp-content/plugins/wp-e-commerce/wpsc-core/wpsc-constants.php on line 17

        Fatal error: session_start() [function.session-start]: Failed to initialize storage module: files (path: ) in /home/mikalu/public_html/wp-content/plugins/wp-e-commerce/wpsc-core/wpsc-constants.php on line 17

    I've tried to turn off safe mode in my PHP configuration: nothing happens, the error's still there. I thought it was some kind of permission issue, so I tried changing /tmp's permissions to 777. Nothing happens. I googled some more and suspect it might have something to do with the FastCGI configuration and such, which I totally don't understand. My search results mostly suggest consulting the web hosting provider or even moving to another host, but in this case I am the owner of the server (a VPS with cPanel/WHM), and I have no idea how to solve this kind of problem.
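    One likely explanation: safe mode is read per SAPI, so editing the CLI php.ini leaves Apache's copy (or a per-vhost FastCGI php.ini) untouched, which would make the change appear to do nothing. A sketch of the usual workaround: give the site a session directory owned by its own uid instead of fighting over /tmp (paths are illustrative):

        mkdir -p /home/mikalu/tmp/php_sessions
        chown mikalu:mikalu /home/mikalu/tmp/php_sessions
        chmod 700 /home/mikalu/tmp/php_sessions

    then, in whichever php.ini the site actually loads (phpinfo() shows the loaded file):

        session.save_path = /home/mikalu/tmp/php_sessions
        ; or disable the long-deprecated safe mode altogether
        safe_mode = Off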


  • libc-bin errors when trying to install php

    - by jonney
    I am trying to update and install PHP on my Ubuntu Server 12.04 using the commands below:

        apt-get upgrade php
        apt-get install php5-curl php5-gd php5-mysql php5-pgsql

    However, I receive this error every time:

        gzip: stdout: No space left on device
        E: mkinitramfs failure cpio 141 gzip 1
        update-initramfs: failed for /boot/initrd.img-3.2.0-34-generic with 1.
        run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1
        Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-34-generic.postinst line 1010.
        dpkg: error processing linux-image-3.2.0-34-generic (--configure):
         subprocess installed post-installation script returned error exit status 2
        dpkg: dependency problems prevent configuration of linux-image-server:
         linux-image-server depends on linux-image-3.2.0-33-generic; however:
          Package linux-image-3.2.0-33-generic is not configured yet.
        dpkg: error processing linux-image-server (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of linux-server:
         linux-server depends on linux-image-server (= 3.2.0.33.36); however:
          Package linux-image-server is not configured yet.
        dpkg: error processing linux-server (--configure):
         dependency problems - leaving unconfigured
        Setting up libpq5 (9.1.10-0ubuntu12.04) ...
        No apport report written because the error message indicates it's a follow-up error from a previous failure.
        No apport report written because MaxReports has already been reached
        Setting up php5-curl (5.3.10-1ubuntu3.8) ...
        Setting up php5-pgsql (5.3.10-1ubuntu3.8) ...
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.2.0-32-generic
        gzip: stdout: No space left on device
        E: mkinitramfs failure cpio 141 gzip 1
        update-initramfs: failed for /boot/initrd.img-3.2.0-32-generic with 1.
        dpkg: error processing initramfs-tools (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports has already been reached
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        Errors were encountered while processing:
         linux-image-3.2.0-33-generic
         linux-image-3.2.0-34-generic
         linux-image-server
         linux-server
         initramfs-tools
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I'm not sure what's wrong or why it can't process the linux-image files?
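    The PHP packages look like bystanders here: every failure funnels back to "gzip: stdout: No space left on device" while update-initramfs writes to /boot, which points at a full boot partition. A sketch of the usual recovery; the kernel version purged below is an example only, always keep the one uname -r reports:

        df -h /boot                          # confirm it is (nearly) full
        dpkg -l 'linux-image-*' | grep ^ii   # list installed kernels
        uname -r                             # the running kernel: keep it
        sudo apt-get purge linux-image-3.2.0-27-generic   # an old one, say
        sudo dpkg --configure -a             # let dpkg finish the aborted runs
        sudo apt-get -f install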


  • Configuring Transmission for faster download

    - by Luis Alvarado
    I have tested the following torrent clients on the same PC with the same torrent/magnet links: Transmission, KTorrent, Deluge, qBittorrent, and Vuze. After 7 days of testing I noticed that the only one that took longer to start downloading and to reach and hold its maximum download speed was Transmission. It was the slowest of them all to download the same content (I tested 8 torrents and 4 magnet links from different sites) and the one that took longest to start downloading, or to restart after a pause/resume event. The other four took less than 2 seconds, for example, to start downloading, and downloaded the same content in 50% to 80% less time. I think Transmission has the same downloading/resuming capabilities as the other clients, so the gap may come down to some configuration I need to do to get the same speed and effect as the others. In my tests, all clients ran with their default configurations; no changes were made. They were tested on the same PC, with the same network connection, in the same time periods, so I am thinking that Transmission just needs a bit of configuration tuning. I also set the same port for each client, and checked the router for any blocking and anything related to the network. What options can I change so that Transmission resumes a download faster (grabs the seeds faster) and keeps a fast download all the time (stays with the seeds that offer the best connection, for example)?
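    If the stock client is kept, most of Transmission's knobs live in ~/.config/transmission/settings.json (edit it while Transmission is closed, otherwise the file is overwritten on exit). A hedged fragment with illustrative values; DHT, PEX, and LPD mainly affect how quickly peers are rediscovered after a resume:

        {
            "dht-enabled": true,
            "pex-enabled": true,
            "lpd-enabled": true,
            "peer-limit-global": 400,
            "peer-limit-per-torrent": 100,
            "cache-size-mb": 8
        }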


  • Ethernet switch not working

    - by Froskoy
    I've just tried using two different Ethernet switches on my network to replace an 8-port Netgear gigabit Ethernet switch, which works fine but doesn't have enough ports for what I need. Computers are connected to a TP-Link TD-8840T router via a switch, and use DHCP for IP address assignment. One switch is a TigerSwitch 6924M, which I'd expect to be difficult to set up, since it is second-hand and has an advanced configuration menu that I can't access without a serial port. However, the second switch I tried is a new TP-Link TL-SF024, which doesn't appear to have any configuration options, so that can't be the problem. When I say "not working," I mean that although the computers show as connected to a network, they cannot access the internet; for example, commands like ping -c10 google.co.uk come up with 100% packet loss. What could be causing the problem and how do I fix it?


  • How to sync Ubuntu/software/configurations between N computers with free software and/or without a cloud?

    - by skanatek
    Note: this question is not about syncing data in a Dropbox-like way (files, folders); it is more about syncing configurations. I would like to have exactly the same version of Ubuntu, with all the same software installed and configured, both on my desktop PC and on my laptop (and maybe on my small netbook) without using Ubuntu Sync and with minimal maintenance effort (set up once, run for a long time). The use case is the following (a package-level sketch follows after the list):

        1. I work on my laptop and make some changes to software configuration, for example: configure vim to have a new plugin; update the Search Tracker / Recoll file search index; configure Thunderbird to have an additional IMAP account ('remember password'); add some new bookmarks in Firefox/Chrome; change the desktop background image; install new software with apt-get install; build and install new software with checkinstall; etc.
        2. I do some 'sync' operation.
        3. I switch to my desktop PC and get all the changes from (1) working there.
        4. I work on my desktop PC and make some changes to software configuration, for example: add a new directory to the list of directories backed up by DejaDup; add a new spell-checking dictionary to LibreOffice Writer; configure Terminator to use colored fonts; install a new font into the system; configure Ekiga to make phone calls; etc.
        5. I do some 'sync' operation.
        6. I switch to my laptop and get all the changes from (1) and (4) working there.

    Question: what free/open-source software can I use to sync both machines' Ubuntu systems, installed software, and configurations? Is it possible to do that without any cloud services?

    Complementary question: it is obvious that the desktop and the laptop have different hardware configurations. How does the 'sync software' in question deal with video drivers, wlan drivers, and their configurations?

    Note: I do not need all the PCs to be synced at the same time, because I work with only one machine at once. Note: I considered using Chef to solve the problem, but it seems it might be really cumbersome to maintain such a setup. Note: I also considered using a bootable USB stick with Ubuntu installed (portable Linux), but I am not sure the video drivers would work then.
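    For the installed-software half of this, the dpkg selection list is a simple, cloud-free building block; configuration files would still need to travel separately (for instance in a git repository of dotfiles), and hardware-specific driver packages are worth filtering out of the list before applying it:

        # on the machine that is ahead
        dpkg --get-selections > package-list.txt
        # carry the file over (USB stick, scp), then on the other machine
        sudo dpkg --set-selections < package-list.txt
        sudo apt-get dselect-upgrade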


  • trying to install Lync 2010 and experiencing an error with the Central Management Store

    - by Itai Ganot
    I'm trying to install Lync 2010 and I'm getting stuck at the stage where I have to install, or point to, the local configuration store. I've tried finding it in the domain without luck. Any recommendations?

        PS C:\Users\Administrator.ASUTA> Get-CsConfigurationStoreLocation
        WARNING: No Configuration Store location has been set.

        PS C:\Users\Administrator.ASUTA> Get-CsComputer "$env:computername.$env:userdnsdomain"
        Get-CsComputer : Cannot find location of Central Management Store in Active Directory.
        At line:1 char:15
        + Get-CsComputer <<<< "$env:computername.$env:userdnsdomain"
            + CategoryInfo          : ResourceUnavailable: (:) [Get-CsComputer], ManagementStoreNotFoundException
            + FullyQualifiedErrorId : ManagementStoreNotFound,Microsoft.Rtc.Management.Xds.GetComputerCmdlet
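    In a fresh deployment the Central Management Store normally gets created by publishing a topology (or the "Prepare first Standard Edition server" step of the Deployment Wizard); once a CMS exists somewhere, a box can be pointed at it with the cmdlets below. A hedged sketch: the SQL FQDN and instance name are placeholders for your actual back end:

        # point this machine at an existing Central Management Store
        Set-CsConfigurationStoreLocation -SqlServerFqdn sql01.asuta.local -SqlInstanceName rtc

        # or, for a brand-new deployment, create the store first
        Install-CsDatabase -CentralManagementDatabase -SqlServerFqdn sql01.asuta.local -SqlInstanceName rtc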


  • Log Blog

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved.

    Logging – A log blog. In another blog (Missing Fields and Defaults) I spoke about not doing a blog about log files, but then I looked at it again and realized that this is a nice opportunity to show a simple yet powerful tool and also deal with static variables and functions in C#. My log had to be able to answer a few simple logging rules:

        - To log or not to log? That is the question. Always log! That is the answer.
        - Do we share a log? Even when a file is opened with a minimal lock, it does not share well and performance greatly suffers, so sharing a log is not a good idea. Also, when sharing, it is harder to find your particular entries and you have to establish rules about retention. My recommendation: do not share!
        - How verbose? Your log can be very verbose (a good thing when testing), very terse (a good thing in day-to-day runs), or somewhere in between. You must be the judge. In my blog, I elect to always report a run with start and end times, and always report errors. I normally use 5 levels of logging: 4 – write all, 3 – write more, 2 – write some, 1 – write errors and timing, 0 – write none. The code sample below is more general than that. It uses the config file to set the max log level, and each call to the log assigns a level to the call itself. If the level is above the .config highest level, the line will not be written. Programmers decide which log line belongs to which level, and thus we can set the .config differently for production and testing.
        - Where do I keep the log? If your career is important to you, discuss this with the boss and with the system admin. We keep logs on the L: drive of our server and make sure that we have a directory for each app that needs a log. When adding a new app, add a new directory. The default location for the log is also found in the .config file.
        - Print one or many? There are two options here. (1) Print many, open but once: you start the stream and close it only when the program ends. This is what you can do when you perform in "batch" mode, like in a console app or an stsadm extension. The advantage is that starting and closing a stream is expensive and time consuming, and because we use a unique file, keeping it open for a long time does not cause contention problems. (2) Print one entry at a time, or open many: every time you write a line, you start the stream, write to it, and close it. This works for event receivers, feature receivers, and web parts, where scalability requires us to create objects on the fly and get rid of them as soon as possible. A default value for onceOrMany resides in the .config.

    All of the above applies to any Windows or web application, not just SharePoint. So, as usual, here is a routine that does it all, and a few simple functions that call it for a variety of purposes.

    So without further ado, here is app.config:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
            <configSections>
                <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" >
                    <section name="statics.Properties.Settings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
                </sectionGroup>
            </configSections>
            <applicationSettings>
                <statics.Properties.Settings>
                    <setting name="oneOrMany" serializeAs="String">
                        <value>False</value>
                    </setting>
                    <setting name="logURI" serializeAs="String">
                        <value>C:\staticLog.txt</value>
                    </setting>
                    <setting name="highestLevel" serializeAs="String">
                        <value>2</value>
                    </setting>
                </statics.Properties.Settings>
            </applicationSettings>
        </configuration>

    And now the code. In order to persist the variables between calls, and also to persist (or not to persist) the log file itself, I created an EventLog class with static variables and functions. Static functions do not need an instance of the class in order to work. If you ever wondered why our Main function is static, the answer is that something needs to run before instantiation so that other objects may be instantiated, and this is what the static Main does. The various logging functions and variables are created as static because they do not need instantiation, and as a fringe benefit they remain un-destroyed between calls. The Main function here is just used for testing; note that it does not instantiate anything, it just uses the log functions. This is possible because the functions are static. Also note that the function calls are of the form Class.Function.

        using System;
        using System.IO;

        namespace statics
        {
            class Program
            {
                static void Main(string[] args)
                {
                    // write a single line
                    EventLog.LogEvents("ha ha", 3, "C:\\hahafile.txt", 4, true, false);
                    // this single line will not be written because the msgLevel is too high
                    EventLog.LogEvents("baba", 3, "C:\\babafile.txt", 2, true, false);
                    // the next 4 lines will be written in succession - no closing
                    EventLog.LogLine("blah blah", 1);
                    EventLog.LogLine("da da", 1);
                    EventLog.LogLine("ma ma", 1);
                    EventLog.LogLine("lah lah", 1);
                    EventLog.CloseLog(); // log will close
                    // now with specific functions
                    EventLog.LogSingleLine("one line", 1);
                    // this is just a test, the log is already closed
                    EventLog.CloseLog();
                }
            }

            public class EventLog
            {
                public static string logURI = Properties.Settings.Default.logURI;
                public static bool isOneLine = Properties.Settings.Default.oneOrMany;
                public static bool isOpen = false;
                public static int highestLevel = Properties.Settings.Default.highestLevel;
                public static StreamWriter sw;

                /// <summary>
                /// The program will "print" the msg into the log unless msgLevel is > msgLimit.
                /// oneOrMany is true when once: the program will open the log, print the msg,
                /// and close the log. False when many: the program will keep the log open
                /// until close = true. Normally all the arguments will come from the
                /// app.config. Called by the overloads of LogLine below.
                /// </summary>
                public static void LogEvents(string msg, int msgLevel, string logFileName, int msgLimit, bool oneOrMany, bool close)
                {
                    // to print or not to print
                    if (msgLevel <= msgLimit)
                    {
                        // open the file, from the argument (logFileName) or from the config (logURI)
                        if (!isOpen)
                        {
                            string logFile = logFileName;
                            if (logFileName == "")
                            {
                                logFile = logURI;
                            }
                            sw = new StreamWriter(logFile, true);
                            sw.WriteLine("Started At: " + DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss"));
                            isOpen = true;
                        }
                        // print
                        sw.WriteLine(msg);
                    }
                    // close when instructed
                    if (close || oneOrMany)
                    {
                        if (isOpen)
                        {
                            sw.WriteLine("Ended At: " + DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss"));
                            sw.Close();
                            isOpen = false;
                        }
                    }
                }

                /// <summary>
                /// The simplest, just msg and level
                /// </summary>
                public static void LogLine(string msg, int msgLevel)
                {
                    // use the given msg and msgLevel; all others are defaults
                    LogEvents(msg, msgLevel, "", highestLevel, isOneLine, false);
                }

                /// <summary>
                /// One line at a time - open, print, close
                /// </summary>
                public static void LogSingleLine(string msg, int msgLevel)
                {
                    LogEvents(msg, msgLevel, "", highestLevel, true, true);
                }

                /// <summary>
                /// Used to close: high level, low limit, once and close are set
                /// </summary>
                public static void CloseLog()
                {
                    LogEvents("", 15, "", 1, true, true);
                }
            }
        }

    That's all folks!


  • How do I learn IPSec VPN implementation on FreeBSD from pfSense

    - by Lang Hai
    I've been trying to figure out a complete working solution for IPSec VPN implementation on FreeBSD, but with no luck so far. pfSense seems to have done a fantastic job supporting IPSec, even for mobile clients, so I downloaded and installed pfSense hoping to figure out how it works, or at least to see some configuration examples, but I couldn't find anything interesting, maybe because I'm not familiar with pfSense. So I'd like to ask for help: How does pfSense implement IPSec, and what tools are used? Where does pfSense store all its configuration files? And since pfSense has its own kernel mods and acts as a different OS, there's no way for us to install it on top of an existing FreeBSD box; plus, it is such a great project combining those fantastic features that my question can be extended to: how do we learn from pfSense, and implement its features on top of a regular FreeBSD server?


  • DVD ROM is not working

    - by Cyril N.
    (Note: I don't know which StackExchange site to put this question on; I'll thank the moderator who moves it to a more appropriate place, if there is an S.E. available for my question.) I have a DVD-RW drive that is listed in the BIOS, and if no disc is in, it is also present in "My Computer" on my Fedora 16. But when I put a disc in, the icon disappears from "My Computer" and I cannot do anything with the drive (like erasing a RW disc). I'd like to boot a Fedora 17 live CD image; I burned it on another computer, but when I try to boot it, nothing happens and I'm redirected to the GRUB on my hard disk. The command cdrecord -scanbus shows this:

        wodim: Warning: controller returns wrong size for CD capabilities page.
        wodim: Cannot get CD capabilities data.
        6,1,0 601) 'HD-DT%ST' 'DVD%RAM G@22NP20' '1&04' Removable CD-ROM

    And when I try to mount the disc manually, I get this error:

        mount: block device /dev/sr0 is write-protected, mounting read-only
        mount: /dev/sr0: can't read superblock

    Here's a paste of dmesg | grep sr0:

        [    5.161265] sr0: scsi-1 drive
        [    5.161621] sr 6:0:1:0: Attached scsi CD-ROM sr0
        [  834.545978] sr0: Hmm, seems the drive doesn't support multisession CD's
        [  841.731194] sr0: CDROM (ioctl) error, command: Get configuration 46 00 00 00 00 00 00 00 20 00
        [  842.021640] sr 6:0:1:0: [sr0] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        [  842.021652] sr 6:0:1:0: [sr0] Sense Key : Aborted Command [current]
        [  842.021662] sr 6:0:1:0: [sr0] Add. Sense: No additional sense information
        [  842.021672] sr 6:0:1:0: [sr0] CDB: Read(10): 28 00 00 00 00 00 00 00 02 00
        [  842.021688] end_request: I/O error, dev sr0, sector 0
        [  842.021697] Buffer I/O error on device sr0, logical block 0
        [  842.023715] sr 6:0:1:0: [sr0] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        [  843.048203] sr 6:0:1:0: [sr0] Sense Key : Aborted Command [current]
        [  843.048211] sr 6:0:1:0: [sr0] Add. Sense: No additional sense information
        [  843.048219] sr 6:0:1:0: [sr0] CDB: Read(10): 28 00 00 00 00 00 00 00 01 00
        [  843.048234] end_request: I/O error, dev sr0, sector 0
        [  843.048274] EXT4-fs (sr0): unable to read superblock
        [  843.063155] sr0: CDROM (ioctl) error, command: Get configuration 46 00 00 00 00 00 00 00 20 00
        [  843.075904] sr0: CDROM (ioctl) error, command: Get configuration 46 00 00 00 00 00 00 00 20 00
        [  843.220512] sr 6:0:1:0: [sr0] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        [  843.220522] sr 6:0:1:0: [sr0] Sense Key : Aborted Command [current]
        [  843.220530] sr 6:0:1:0: [sr0] Add. Sense: No additional sense information
        [  843.220538] sr 6:0:1:0: [sr0] CDB: Read(10): 28 00 00 00 00 00 00 00 01 00
        [  843.220553] end_request: I/O error, dev sr0, sector 0
        [  843.220609] FAT-fs (sr0): unable to read boot sector

    The lines from "Sense Key" (line 6) to DRIVER_SENSE (line 11) repeat many times. I then swapped in a spare DVD drive I had, and the disc still didn't boot. I then changed the IDE cable, but still no success. What can I do to make it work? Thanks for your help.


  • Issue with Reporting Services - rsreportserver.config is denied

    - by Gabe
    The error message is: 'd:\Program Files\Microsoft SQL Server\MSSQL.3\Reporting Services\ReportServer\RSReportServer.config' is denied. I open the SSRS Configuration tool and I do see the service as started and running. I gave Network Service the necessary permissions on the ReportServer folder and the config file. SQL Server and SSRS are using LocalSystem as the 'Log On As' account in SQL Server Configuration Manager. Most of the sites I googled seem relevant only when it's Network Service running; I changed it to log on as NT AUTHORITY\NetworkService but still had the same issue. In the SSRS Configuration Manager, I see that the Web Service Identity is not configured (red), though it shows, read-only, as domain\aspnet. Windows Server 2003 R2 and SSRS 2005.


  • Oracle Global HR Cloud Implementation Training Can Help Meet Your Business Needs

    - by HCM-Oracle
    By Jim Vonick

    A key goal for the deployment of your Oracle Global HR Cloud applications is to accelerate the implementation and adoption of your applications, so that your business can start realizing all of the benefits that this rich solution offers. Implementation team members need the skills and knowledge to ensure a smooth, rapid, and successful implementation of your applications. During set-up, you want to optimize the configuration to best meet your business needs. In order to do this you need to understand the foundation and configuration options of your applications, so that decisions made during set-up best align with your business. To that end, product-level implementation training is recommended for Oracle Global HR Cloud deployments.

    Training for implementation team members and consultants:

        - Fusion Applications: HCM Security: Learn how to implement security for Oracle Fusion HCM applications by creating and customizing roles. You'll learn how to create security profiles to restrict data access, provision roles to users, create and manage user accounts, and verify security setup.
        - Fusion Applications: HCM Global Human Resources: Learn how to set up your enterprise and workforce structures, how to perform functional tasks, and how to configure security for Global Human Resources data.
        - Fusion Applications: HCM Compensation: Learn how to implement, configure, and use Oracle Fusion Compensation to manage base pay, individual compensation, workforce compensation, and total compensation statements.
        - Fusion Applications: HCM Benefits: This course teaches you to implement, configure, and manage Oracle Fusion Benefits, including how to implement benefit plans and programs.
        - Fusion Applications: HCM Payroll Implementation (US): This course provides implementation training for payroll managers and payroll administrators. Learn how to process payroll to ensure accurate setup results.

    Learn more: see all Fusion HCM Training.

    Jim Vonick is a Senior Product Manager with Oracle University focusing on training for Oracle Applications and Industry Solutions.


  • Weird unexpected image compression on a web server running Apache on Ubuntu?

    - by Billy Bob Thornton
    I have a weird problem on my production web server running Apache on Ubuntu: it compresses my images, thereby dramatically lowering their quality! Actually I have two virtual hosts running, each located in a different folder. Whether I display .gif images by navigating the two sites or access them directly by their URL, their size and quality are invariably degraded. I tried with three different browsers: same problem. Using the same browsers on other sites on the web: no problem. Of course I disabled mod_deflate on the server (which should not compress images anyway), but the phenomenon remains. On my local development server, running the same configuration, everything is OK. Now I'm completely lost! For the record, my configuration: Ubuntu 10.04, Apache 2, PHP 5.

