Search Results

Search found 51448 results on 2058 pages for 'log files'.


  • Oracle Solaris: Zones on Shared Storage

    - by Jeff Victor
    Oracle Solaris 11.1 has several new features. At oracle.com you can find a detailed list. One of the significant new features, and the most significant new feature releated to Oracle Solaris Zones, is casually called "Zones on Shared Storage" or simply ZOSS (rhymes with "moss"). ZOSS offers much more flexibility because you can store Solaris Zones on shared storage (surprise!) so that you can perform quick and easy migration of a zone from one system to another. This blog entry describes and demonstrates the use of ZOSS. ZOSS provides complete support for a Solaris Zone that is stored on "shared storage." In this case, "shared storage" refers to fiber channel (FC) or iSCSI devices, although there is one lone exception that I will demonstrate soon. The primary intent is to enable you to store a zone on FC or iSCSI storage so that it can be migrated from one host computer to another much more easily and safely than in the past. With this blog entry, I wanted to make it easy for you to try this yourself. I couldn't assume that you have a SAN available - which is a good thing, because neither do I! What could I use, instead? [There he goes, foreshadowing again... -Ed.] Developing this entry reinforced the lesson that the solution to every lab problem is VirtualBox. Oracle VM VirtualBox (its formal name) helps here in a couple of important ways. It offers the ability to easily install multiple copies of Solaris as guests on top of any popular system (Microsoft Windows, MacOS, Solaris, Oracle Linux (and other Linuxes) etc.). It also offers the ability to create a separate virtual disk drive (VDI) that appears as a local hard disk to a guest. This virtual disk can be moved very easily from one guest to another. In other words, you can follow the steps below on a laptop or larger x86 system. Please note that the ability to use ZOSS to store a zone on a local disk is very useful for a lab environment, but not so useful for production. I do not suggest regularly moving disk drives among computers. In the method I describe below, that virtual hard disk will contain the zone that will be migrated among the (virtual) hosts. In production, you would use FC or iSCSI LUNs instead. The zonecfg(1M) man page details the syntax for each of the three types of devices. Why Migrate? Why is the migration of virtual servers important? Some of the most common reasons are: Moving a workload to a different computer so that the original computer can be turned off for extensive maintenance. Moving a workload to a larger system because the workload has outgrown its original system. If the workload runs in an environment (such as a Solaris Zone) that is stored on shared storage, you can restore the service of the workload on an alternate computer if the original computer has failed and will not reboot. You can simplify lifecycle management of a workload by developing it on a laptop, migrating it to a test platform when it's ready, and finally moving it to a production system. Concepts For ZOSS, the important new concept is named "rootzpool". You can read about it in the zonecfg(1M) man page, but here's the short version: it's the backing store (hard disk(s), or LUN(s)) that will be used to make a ZFS zpool - the zpool that will hold the zone. This zpool: contains the zone's Solaris content, i.e. 
    - contains the zone's Solaris content, i.e. the root file system
    - does not contain any content not related to the zone
    - can only be mounted by one Solaris instance at a time

    Method Overview. Here is a brief list of the steps to create a zone on shared storage and migrate it. The next section shows the commands and output. You will need a host system with an x86 CPU (hopefully at least a couple of CPU cores), at least 2GB of RAM, and at least 25GB of free disk space. (The steps below will not actually use 25GB of disk space, but I don't want to lead you down a path that ends in a big sign that says "Your HDD is full. Good luck!")
    1. Configure the zone on both systems, specifying the rootzpool that both will use. The best way is to configure it on one system and then copy the output of "zonecfg export" to the other system to be used as input to zonecfg. This method reduces the chances of pilot error. (It is not necessary to configure the zone on both systems before creating it. You can configure this zone in multiple places, whenever you want, and migrate it to one of those places at any time - as long as those systems all have access to the shared storage.)
    2. Install the zone on one system, onto shared storage.
    3. Boot the zone.
    4. Provide system configuration information to the zone. (In the Real World(tm) you will usually automate this step.)
    5. Shut down the zone.
    6. Detach the zone from the original system.
    7. Attach the zone to its new "home" system.
    8. Boot the zone.
    The zone can be used normally, and even migrated back, or to a different system.

    Details. The rest of this post shows the commands and output. The two hostnames are "sysA" and "sysB". Note that each Solaris guest might use a different device name for the VDI that they share. I used the device names shown below, but you must discover the device name(s) after booting each guest. In a production environment you would also discover the device name first and then configure the zone with that name. Fortunately, you can use the command "zpool import" or "format" to discover the device on the "new" host for the zone.

    The first steps create the VirtualBox guests and the shared disk drive. I describe the steps here without demonstrating them.
    - Download VirtualBox and install it using a method normal for your host OS. You can read the complete instructions.
    - Create two VirtualBox guests, each to run Solaris 11.1. Each will use its own VDI as its root disk.
    - Install Solaris 11.1 in each guest. To install a Solaris 11.1 guest, you can either download a pre-built VirtualBox guest and import it, or install Solaris 11.1 from the "text install" media. If you use the latter method, after booting you will not see a windowing system. To install the GUI and other important things, log in and run "pkg install solaris-desktop" and take a break while it installs those important things.
    - Life is usually easier if you install the VirtualBox Guest Additions, because then you can copy and paste between the host and guests, etc. You can find the guest additions in the folder matching the version of VirtualBox you are using. You can also read the instructions for installing the guest additions.
    - To create the zone's shared VDI in VirtualBox, you can open the storage configuration for one of the two guests, select the SATA controller, and click on the "Add Hard Disk" icon nearby. Choose "Create New Disk" and specify an appropriate path name for the file that will contain the VDI. The shared VDI must be at least 1.5 GB. Note that the guest must be stopped to do this.
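    If you prefer the command line to the GUI storage dialog, VBoxManage can create and attach the same disk. This is a minimal sketch, not from the original post: the guest name "sysA", the controller name "SATA", the file path, and the free port number are assumptions - check your own VM's storage settings for the real names.

        # Create a 2 GB VDI (VBoxManage sizes are in MB)
        VBoxManage createhd --filename ~/VirtualBox/zoss-shared.vdi --size 2048

        # Attach it to the guest's SATA controller (the guest must be powered off);
        # repeat for the second guest with its own VM name
        VBoxManage storageattach sysA --storagectl "SATA" --port 2 --device 0 \
            --type hdd --medium ~/VirtualBox/zoss-shared.vdi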
    Add that VDI to the other guest - using its Storage configuration - so that each can access it while running. The steps start out the same, except that you choose "Choose Existing Disk" instead of "Create New Disk." Because the disk is configured on both of them, VirtualBox prevents you from running both guests at the same time.

    Identify the device names of that VDI in each of the guests. Solaris chooses the name based on existing devices. The names may be the same, or may be different from each other. This step is shown below as "Step 1."

    Assumptions. In the example shown below, I make these assumptions:
    - The guest that will own the zone at the beginning is named sysA.
    - The guest that will own the zone after the first migration is named sysB.
    - On sysA, the shared disk is named /dev/dsk/c7t2d0.
    - On sysB, the shared disk is named /dev/dsk/c7t3d0.

    (Finally!) The Steps

    Step 1) Determine the name of the disk that will move back and forth between the systems.

        root@sysA:~# format
        Searching for disks...done
        AVAILABLE DISK SELECTIONS:
        0. c7t0d0 /pci@0,0/pci8086,2829@d/disk@0,0
        1. c7t2d0 /pci@0,0/pci8086,2829@d/disk@2,0
        Specify disk (enter its number): ^D

    Step 2) The first thing to do is partition and label the disk. The magic needed to write an EFI label is not overly complicated.

        root@sysA:~# format -e c7t2d0
        selecting c7t2d0
        [disk formatted]
        FORMAT MENU: ...
        format> fdisk
        No fdisk table exists. The default partition for the disk is:
        a 100% "SOLARIS System" partition
        Type "y" to accept the default partition, otherwise type "n" to edit the partition table. n
        SELECT ONE OF THE FOLLOWING: ... Enter Selection: 1 ... G=EFI_SYS 0=Exit? f
        SELECT ONE... ... 6
        format> label ... Specify Label type[1]: 1 Ready to label disk, continue? y
        format> quit
        root@sysA:~# ls /dev/dsk/c7t2d0
        /dev/dsk/c7t2d0

    Step 3) Configure zone1 on sysA.

        root@sysA:~# zonecfg -z zone1
        Use 'create' to begin configuring a new zone.
        zonecfg:zone1> create
        create: Using system default template 'SYSdefault'
        zonecfg:zone1> set zonename=zone1
        zonecfg:zone1> set zonepath=/zones/zone1
        zonecfg:zone1> add rootzpool
        zonecfg:zone1:rootzpool> add storage dev:dsk/c7t2d0
        zonecfg:zone1:rootzpool> end
        zonecfg:zone1> exit
        root@sysA:~# zonecfg -z zone1 info
        zonename: zone1
        zonepath: /zones/zone1
        brand: solaris
        autoboot: false
        bootargs:
        file-mac-profile:
        pool:
        limitpriv:
        scheduling-class:
        ip-type: exclusive
        hostid:
        fs-allowed:
        anet: ...
        rootzpool:
            storage: dev:dsk/c7t2d0

    Step 4) Install the zone. This step takes the most time, but you can wander off for a snack or a few laps around the gym - or both! (Just not at the same time...)

        root@sysA:~# zoneadm -z zone1 install
        Created zone zpool: zone1_rpool
        Progress being logged to /var/log/zones/zoneadm.20121022T163634Z.zone1.install
        Image: Preparing at /zones/zone1/root.
        AI Manifest: /tmp/manifest.xml.RXaycg
        SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
        Zonename: zone1
        Installation: Starting ...
        Creating IPS image
        Startup linked: 1/1 done
        Installing packages from: solaris
            origin: http://pkg.us.oracle.com/support/
        DOWNLOAD      PKGS     FILES        XFER (MB)    SPEED
        Completed     183/183  33556/33556  222.2/222.2  2.8M/s
        PHASE                           ITEMS
        Installing new actions          46825/46825
        Updating package state database Done
        Updating image state            Done
        Creating fast lookup database   Done
        Installation: Succeeded
        Note: Man pages can be obtained by installing pkg:/system/manual
        done.
        Done: Installation completed in 1696.847 seconds.
        Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process.
        Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T163634Z.zone1.install

    Step 5) Boot the zone.

        root@sysA:~# zoneadm -z zone1 boot

    Step 6) Log in to the zone's console to complete the specification of system information.

        root@sysA:~# zlogin -C zone1

    Answer the usual questions and wait for a login prompt. Then you can end the console session with the usual "~." incantation.

    Step 7) Shut down the zone so it can be "moved."

        root@sysA:~# zoneadm -z zone1 shutdown

    Step 8) Detach the zone so that the original global zone can't use it.

        root@sysA:~# zoneadm list -cv
          ID NAME     STATUS     PATH           BRAND    IP
           0 global   running    /              solaris  shared
           - zone1    installed  /zones/zone1   solaris  excl
        root@sysA:~# zpool list
        NAME          SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
        rpool         17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -
        zone1_rpool   1.98G  484M   1.51G  23%  1.00x  ONLINE  -
        root@sysA:~# zoneadm -z zone1 detach
        Exported zone zpool: zone1_rpool

    Step 9) Review the result and shut down sysA so that sysB can use the shared disk.

        root@sysA:~# zpool list
        NAME   SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
        rpool  17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -
        root@sysA:~# zoneadm list -cv
          ID NAME     STATUS      PATH           BRAND    IP
           0 global   running     /              solaris  shared
           - zone1    configured  /zones/zone1   solaris  excl
        root@sysA:~# init 0

    Step 10) Now boot sysB and configure a zone with the parameters shown above in Step 3. (Again, the safest method is to use "zonecfg ... export" on sysA, as described in the section "Method Overview" above.) The one difference is the name of the rootzpool storage device, which was shown in the list of assumptions, and which you must determine by booting sysB and using the "format" or "zpool import" command. When that is done, you should see the output shown next. (I used the same zonename - "zone1" - in this example, but you can choose any valid zonename you want.)

        root@sysB:~# zoneadm list -cv
          ID NAME     STATUS      PATH           BRAND    IP
           0 global   running     /              solaris  shared
           - zone1    configured  /zones/zone1   solaris  excl
        root@sysB:~# zonecfg -z zone1 info
        zonename: zone1
        zonepath: /zones/zone1
        brand: solaris
        autoboot: false
        bootargs:
        file-mac-profile:
        pool:
        limitpriv:
        scheduling-class:
        ip-type: exclusive
        hostid:
        fs-allowed:
        anet:
            linkname: net0
        ...
        rootzpool:
            storage: dev:dsk/c7t3d0

    Step 11) Attaching the zone automatically imports the zpool.

        root@sysB:~# zoneadm -z zone1 attach
        Imported zone zpool: zone1_rpool
        Progress being logged to /var/log/zones/zoneadm.20121022T184034Z.zone1.attach
        Installing: Using existing zone boot environment
        Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris
        Cache: Using /var/pkg/publisher.
        Updating non-global zone: Linking to image /.
        Processing linked: 1/1 done
        Updating non-global zone: Auditing packages.
        No updates necessary for this image.
        Updating non-global zone: Zone updated.
        Result: Attach Succeeded.
        Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T184034Z.zone1.attach
        root@sysB:~# zoneadm -z zone1 boot
        root@sysB:~# zlogin zone1
        [Connected to zone 'zone1' pts/2]
        Oracle Corporation  SunOS 5.11  11.1  September 2012

    Step 12) Now let's migrate the zone back to sysA. Create a file in zone1 so we can verify it exists after we migrate the zone back, then begin migrating it back.
        root@zone1:~# ls /opt
        root@zone1:~# touch /opt/fileA
        root@zone1:~# ls -l /opt/fileA
        -rw-r--r-- 1 root root 0 Oct 22 14:47 /opt/fileA
        root@zone1:~# exit
        logout
        [Connection to zone 'zone1' pts/2 closed]
        root@sysB:~# zoneadm -z zone1 shutdown
        root@sysB:~# zoneadm -z zone1 detach
        Exported zone zpool: zone1_rpool
        root@sysB:~# init 0

    Step 13) Back on sysA, check the status.

        Oracle Corporation  SunOS 5.11  11.1  September 2012
        root@sysA:~# zoneadm list -cv
          ID NAME     STATUS      PATH           BRAND    IP
           0 global   running     /              solaris  shared
           - zone1    configured  /zones/zone1   solaris  excl
        root@sysA:~# zpool list
        NAME   SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
        rpool  17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -

    Step 14) Re-attach the zone to sysA.

        root@sysA:~# zoneadm -z zone1 attach
        Imported zone zpool: zone1_rpool
        Progress being logged to /var/log/zones/zoneadm.20121022T190441Z.zone1.attach
        Installing: Using existing zone boot environment
        Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris
        Cache: Using /var/pkg/publisher.
        Updating non-global zone: Linking to image /.
        Processing linked: 1/1 done
        Updating non-global zone: Auditing packages.
        No updates necessary for this image.
        Updating non-global zone: Zone updated.
        Result: Attach Succeeded.
        Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T190441Z.zone1.attach
        root@sysA:~# zpool list
        NAME          SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
        rpool         17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -
        zone1_rpool   1.98G  491M   1.51G  24%  1.00x  ONLINE  -
        root@sysA:~# zoneadm -z zone1 boot
        root@sysA:~# zlogin zone1
        [Connected to zone 'zone1' pts/2]
        Oracle Corporation  SunOS 5.11  11.1  September 2012
        root@zone1:~# zpool list
        NAME   SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
        rpool  1.98G  538M   1.46G  26%  1.00x  ONLINE  -

    Step 15) Check for the file created on sysB, earlier.

        root@zone1:~# ls -l /opt
        total 1
        -rw-r--r-- 1 root root 0 Oct 22 14:47 fileA

    Next Steps. Here is a brief list of some of the fun things you can try next.
    - Add space to the zone by adding a second storage device to the rootzpool. Make sure that you add it to the zone's configuration on both systems!
    - Create a new zone, specifying two disks in the rootzpool when you first configure the zone. When you install that zone, or clone it from another zone, zoneadm uses those two disks to create a mirrored pool. (Three disks will result in a three-way mirror, etc.)

    Conclusion. Hopefully you have seen the ease with which you can now move Solaris Zones from one system to another.
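    One practical tip before you go: the "zonecfg export" flow from the Method Overview looks roughly like this. A minimal sketch, assuming the sysA/sysB hostnames from this post, ssh connectivity between the guests, and an illustrative /tmp path:

        # On sysA: dump the zone's configuration as replayable zonecfg commands
        root@sysA:~# zonecfg -z zone1 export > /tmp/zone1.cfg
        root@sysA:~# scp /tmp/zone1.cfg sysB:/tmp/zone1.cfg

        # On sysB: edit /tmp/zone1.cfg to use sysB's device name (c7t3d0 here),
        # then replay it to create an identical configuration
        root@sysB:~# zonecfg -z zone1 -f /tmp/zone1.cfg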

    Read the article

  • Is Linear Tape File System (LTFS) Best For Transportable Storage?

    - by rickramsey
    Those of us in tape storage engineering take a lot of pride in what we do, but understand that tape is the right answer to a storage problem only some of the time. And, unfortunately for a storage medium with such a long history, it has built up a few preconceived notions that are no longer valid. When I hear customers debate whether to implement tape vs. disk, one of the common strikes against tape is its perceived lack of usability. If you could go back a few generations of corporate acquisitions, you would discover that StorageTek engineers recognized this problem and started developing a solution where a tape drive could look just like a memory stick to a user. The goal was to not have to care about where files were on the cartridge, but to simply see the list of files that were on the tape and click on them to open them up.

    Eventually, our friends in tape over at IBM built upon our work at StorageTek and Sun Microsystems and released the Linear Tape File System (LTFS) feature for the current LTO5 generation of tape drives as an open specification. LTFS is really a wonderful feature and we're proud to have taken part in its beginnings and, as you'll soon read, its future. Today we offer LTFS-Open Edition, which is free for you to use in your Oracle Enterprise Linux 5.5 environment - not only on your LTO5 drives, but also on your Oracle StorageTek T10000C drives. You can download it free from Oracle and try it out.

    LTFS does exactly what its forefathers imagined. Now you can see immediately which files are on a cartridge. LTFS does this by splitting a cartridge into two partitions. The first holds all of the necessary metadata to create a directory structure for you to easily view the contents of the cartridge. The second partition holds the files themselves. When tape media is loaded onto a drive, a complete file system image is presented to the user. Adding files to a cartridge can be as simple as a drag-and-drop, just as you do today on your laptop when transferring files from your hard drive to a thumb drive, or with standard POSIX file operations.

    You may be thinking all of this sounds nice, but asking, "When will I actually use it?" As I mentioned at the beginning, tape is not the right solution all of the time. However, if you ever need to physically move data between locations, tape storage with LTFS should be your most cost-effective and reliable answer. I will give you a few examples of use cases where LTFS can be utilized.

    Media and Entertainment (M&E), Oil and Gas (O&G), and other industries have a strong need for their storage to be transportable. For example, an O&G company hunting for new oil deposits in remote locations takes very large underground seismic images which need to be shipped back to a central data center. M&E operations conduct similar activities when shooting video for productions. M&E companies also often transfer files to third parties for editing and other activities. These companies have three highly flawed options for transporting data: electronic transfer, disk storage transport, or tape storage transport. The first option, electronic transfer, is impractical because of the expense of the bandwidth required to transfer multi-terabyte files reliably and efficiently. If there's one place that has bandwidth, it's your local post office, so many companies revert to physically shipping storage media. Typically, M&E companies rely on transporting disk storage between sites, even though it, too, is expensive.
    Tape storage should be the preferred format because, as IDC points out, "Tape is more suitable for physical transportation of large amounts of data as it is less vulnerable to mechanical damage during transportation compared with disk" (see note 1, below). However, tape storage has not been used in the past because of the restrictions created by proprietary formats. A tape may only be readable if both the sender and receiver have the same proprietary application that was used to write the file. In addition, workflows may be slowed by the need to read the entire tape cartridge during recall. LTFS solves both of these problems, clearing the way for tape to become the standard platform for transferring large files. LTFS is open and, as long as you've downloaded the free reader from our website or that of anyone in the LTO consortium, you can read the data. So if a movie studio ships a scene to a third-party partner to add, for example, sound effects or a music score, it doesn't have to care what technology the third party has. If it's written back to an LTFS-formatted tape cartridge, it can be read.

    Some tape vendors like to claim LTFS is a "standard," but beauty is in the eye of the beholder. It's a specification at this point, not a standard. That said, we're already seeing application vendors create functionality to write in an LTFS format based on the specification. And it's my belief that both customers and the tape storage industry will see the most benefit if we all follow the same path. As such, we have volunteered to lead the way in making LTFS a standard, first with the Storage Networking Industry Association (SNIA), and eventually through to standards bodies such as the American National Standards Institute (ANSI). Expect to hear good news soon about our efforts.

    So, if storage transportability is one of your requirements, I recommend giving LTFS a look. It makes tape much more user-friendly and it's free, which allows tape to maintain all of its cost advantages over disk!

    Note 1 - IDC Report. April, 2011. "IDC's Archival Storage Solutions Taxonomy, 2011"

    - Brian Zents
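    To make the "standard POSIX file operations" point concrete, here is a minimal sketch, not from the original post. The mount invocation follows the open-source LTFS FUSE tool; the device path, mount point, and file name are assumptions - check your drive's actual device node and your LTFS edition's documentation:

        # Mount the cartridge as an ordinary filesystem (device path is an assumption)
        ltfs -o devname=/dev/sg3 /mnt/ltfs

        # From here on, plain POSIX tools work as they would on any disk
        ls -l /mnt/ltfs                      # browse the cartridge's directory tree
        cp seismic-survey.dat /mnt/ltfs/     # "drag and drop" from the command line
        umount /mnt/ltfs                     # flush the index and release the drive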

    Read the article

  • How to Audit and Monitor BI Publisher Reports Access?

    - by kanichiro.nishida
    Do you know who is accessing to which report at what time at your reporting environment ? As you delivered the BI Publisher reports to the production environment and your users start using them as part of their daily business operations you might wonder such questions. With compliance becoming an integral part of any business requirement, auditing your reporting environment is also becoming one of the most critical and hot agenda in today’s enterprise reporting deployments. Also, I believe that auditing the reporting environment is not just for the compliance, but also the way to understand how your users are using the reports and be able to improve the user reporting experience. BI Publisher have introduced Enterprise Level Auditing feature with its 11G release, with an integration of Oracle Fusion Middleware Audit Framework, which comes out of the box with the installation. Yes, this is another great example of the benefit of its tight integration with Fusion Middleware introduced with BI Publisher 11g release. What Information Can I Know about our Reporting Environment? With this new Auditing feature you can now gain the following insights. When a particular user login or logout What report is accessed by who and when and how How long does it take to process a particular report Yes, it’s all there. This is a great news for 10G users, right ? I used to be one of them working with many different IT organizations and were craving for this, but it’s here now with 11G! How Can I Access to the Auditing Information? With the Fusion Middleware Auditing Framework, BI Publisher feed such information either to a log file or to a database. If you decided to get the data into the database then, of course you know, you can use BI Publisher to report and publish, or visualize the data to gain more insights. One thing though, in order to feed the data it requires a few extra steps, which I’ll cover it later.  Regardless of whether it’s the log file or the database to store the Auditing data, first, you need to enable the Auditing feature, which is not enabled as default. So, let’s take a look at how to enable it. How to Enable Auditing Feature? Here is a quick list of the steps: Enable Auditing related properties in BI Publisher configuration file Copy component_events.xml file to Fusion Middleware Audit Framework’s location Enable Auditing Policy with Fusion Middleware Control (Enterprise Manager) Restart WebLogic Server Enable Auditing related properties in BI Publisher configuration file Open xmlp-server-config.xml file, which is located under $BI_HOME/ user_projects/domains/bifoundation_domain/config/bipublisher/repository/Admin/Configuration directory. Set the following three properties values to ‘true’. AUDIT_ENABLED MONITORING_ENABLED AUDIT_JPS_INTEGRATION The ‘AUDIT_JPS_INTEGRATION’ is not in the file as default, so you need to add this. Here is an example of how it looks for the xmlp-server-config.xml file after the modification. 
<?xml version="1.0" encoding="UTF-8" standalone="no"?><xmlpConfigxmlns="http://xmlns.oracle.com/oxp/xmlp"> <property name="SAW_SERVER" value="adc6160510"/> <property name="SAW_SESSION_TIMEOUT" value="90"/> <property name="DEBUG_LEVEL" value="exception"/> <property name="SAW_PORT" value="7001"/> <property name="SAW_PASSWORD" value=""/> <property name="SAW_PROTOCOL" value="http"/> <property name="SAW_VERSION" value="v6"/> <property name="SAW_USERNAME" value=""/> <property name="SAW_URL_SUFFIX" value="analytics/saw.dll"/> <property name="MONITORING_ENABLED" value="true"/> <property name="MONITORING_DEFAULT_HISTORY_SIZE" value="30"/> <property name="AUDIT_ENABLED" value="true"/> <property name="JSESSION_RESET_DISABLED" value="true"/> <property name="SECURITY_MODEL" value="ORACLE_AS_JPS"/> <property name="AUDIT_JPS_INTEGRATION" value="true"/> </xmlpConfig>   Copy component_events.xml file to Audit Framework’s location There is a Audit related configuration file provided by BI Publisher that needs to be copied to the Audit Framework location. 1. Go to the following directory. $BI_HOME /oracle_common/modules/oracle.iau_11.1.1/components 2. Create a directory called ‘xmlpserver’ 3. Copy component_events.xml file from /user_projects/domains/bifoundation_domain/config/bipublisher/repository/Admin/Audit To the newly created ‘xmlpserver’ directory. Enable Auditing Policy with Fusion Middleware Control (EM) Now you can set a level of the auditing for each BI Publisher’s auditing type by using Fusion Middleware Control (a.k.a. Enterprise Manager). 1. Login to Fusion Middleware Control UI http://hostname:port/em (e.g. reporting.oracle.com:7001/em) 2. Access to Audit Policy configuration UI from the menu Under WebLogic Domain, right-click bifoundation_domain, select Security and then click Audit Policy.   3. Set Audit Level for BI Publisher. While you can select ‘Custom’ to set a customized level of Auditing for each component, I’m selecting ‘Medium’ for this exercise.   Restart WebLogic Server After all the above settings, now you need to restart the WebLogic Server instance in order to take those changes in effect. If you’re on Windows you can simply do this by selecting ‘Stop BI Servers’ and ‘Start BI Servers’ from the Start menu. If you’re on Linux then you can run ‘stopWebLogic.sh’ and ‘startWebLogic.sh’, which can be found under $BI_HOME/user_projects/domains/bifoundation_domain/bin Start Auditing! 
    Now, assuming that you have completed the above steps successfully, from this point on any reporting activity should be audited and stored in the auditing log file, which can be found at $BI_HOME/user_projects/domains/bifoundation_domain/servers/AdminServer/logs/auditlogs/xmlpserver/audit.log

    And here is a sample of the log file:

        2011-02-18 02:25:49.928 "" "ReportRendering" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportExecution" "200" "" "/Sample Lite/Published Reporting/Reports/Balance Letter.xdo" "pdf" "RTF Corp Styles" "en_US" - - - - - - - - - - - - - - 86608512 486989824 24517 169 - - -
        2011-02-18 02:25:49.929 "steve.jobs" "ReportRequest" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportAccess" "200" "" "" "pdf" "RTF Corp Styles" - - - true - - - - - - - - - - - - - - - - - -
        2011-02-18 03:25:49.554 "" "ReportDataProcess" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportExecution" "260" "" "/Sample Lite/Published Reporting/Reports/Balance Letter.xdo" - - - - - - - - - - - - - - - - - 34980200 554033152 - 134 - - -
        2011-02-18 03:25:50.282 "" "ReportRendering" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportExecution" "263" "" "/Sample Lite/Published Reporting/Reports/Balance Letter.xdo" "pdf" "RTF Corp Styles" "en_US" - - - - - - - - - - - - - - 16158944 554033152 24517 503 - - -
        2011-02-18 03:25:50.282 "steve.jobs" "ReportRequest" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportAccess" "263" "" "" "pdf" "RTF Corp Styles" - - - true - - - - - - - - - - - - - - - - - -
        2011-02-18 03:30:00.448 "barack.obama" "UserLogin" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000406,0" - - - - "bipublisher(11.1.1)" "UserSession" "26" "" - - - - - - - - - - - - - - - - - - - - - - - - -

    From the above log file you can tell that a user 'steve.jobs' was running reports like 'Balance Letter' on 2/18, and that another user, 'barack.obama', logged into the system at 3:30 on the same day. Yes, every login and logout is recorded, and every report access is recorded in this log file. Now, reading this text file to understand what's going on is pretty overwhelming. And accessing this log file, which lives on the file system of the server where BI Publisher/WebLogic Server runs, is another challenge in typical deployment scenarios. That's where the database storage option for the auditing data comes into the picture. I'll talk about that tomorrow, so stay tuned!
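    Until the database option is covered, even simple text tools can answer the "who logged in" question against this file. A minimal sketch, not from the original post, assuming the audit.log path above; the field positions are taken from the sample lines, where fields 1-3 are date, time, and user:

        # List login events: date, time, and user name
        grep '"UserLogin"' audit.log | awk '{print $1, $2, $3}'

        # Count report requests per user
        grep '"ReportRequest"' audit.log | awk '{print $3}' | sort | uniq -c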

    Read the article

  • squid3 auth thru samba using ntlm to AD doesn't work

    - by derty
    some users here are spending to much time exploring the WWW. So big boss whats to get this under control. We use a squid3 just for some security reason and chace benefits. and now i'm trying to set up a new proxy on a different server (Debian 6) Permissions are defined in AC and the squid3 should get the auth thru samba/winbind by using the ntlm protocol. but i'll get all the time Access, denited. it only works by using LDAP but thats not the way i need it. here some log and confs squid access.log 1326878095.784 1 192.168.15.27 TCP_DENIED/407 4049 GET http://at.msn.com/? -NONE/- text/html 1326878095.791 1 192.168.15.27 TCP_DENIED/407 4294 GET http://at.msn.com/? - NONE/- text/html 1326878095.803 9 192.168.15.27 TCP_DENIED/403 4028 GET http://at.msn.com/? kavan NONE/- text/html 1326878095.848 0 192.168.15.27 TCP_DENIED/403 3881 GET http://www.squid-cache.org/Artwork/SN.png kavan NONE/- text/html 1326878100.279 0 192.168.15.27 TCP_DENIED/403 3735 GET http://www.google.at/ kavan NONE/- text/html 1326878100.296 0 192.168.15.27 TCP_DENIED/403 3870 GET http://www.squid-cache.org/Artwork/SN.png kavan NONE/- text/html 1326878155.700 0 192.168.15.27 TCP_DENIED/407 4072 GET http://ie9cvlist.ie.microsoft.com/IE9CompatViewList.xml - NONE/- text/html 1326878155.705 2 192.168.15.27 TCP_DENIED/407 4317 GET http://ie9cvlist.ie.microsoft.com/IE9CompatViewList.xml - NONE/- text/html 1326878155.709 3 192.168.15.27 TCP_DENIED/403 4026 GET http://ie9cvlist.ie.microsoft.com/IE9CompatViewList.xml kavan NONE/- text/html squid chace 2012/01/18 10:12:49| Creating Swap Directories 2012/01/18 10:12:49| Starting Squid Cache version 3.1.6 for x86_64-pc-linux-gnu... 2012/01/18 10:12:49| Process ID 17236 2012/01/18 10:12:49| With 65535 file descriptors available 2012/01/18 10:12:49| Initializing IP Cache... 
        2012/01/18 10:12:49| DNS Socket created at [::], FD 7
        2012/01/18 10:12:49| DNS Socket created at 0.0.0.0, FD 8
        2012/01/18 10:12:49| Adding nameserver 192.168.15.2 from /etc/resolv.conf
        2012/01/18 10:12:49| Adding nameserver 192.168.15.19 from /etc/resolv.conf
        2012/01/18 10:12:49| Adding nameserver 192.168.15.1 from /etc/resolv.conf
        2012/01/18 10:12:49| Adding domain schoenbrunn.local from /etc/resolv.conf
        2012/01/18 10:12:49| helperOpenServers: Starting 5/5 'squid_ldap_auth' processes
        2012/01/18 10:12:49| helperOpenServers: Starting 10/10 'ntlm_auth' processes
        2012/01/18 10:12:49| helperOpenServers: Starting 10/10 'squid_kerb_auth' processes
        2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
        2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
        2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
        2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
        2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
        2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
        2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
        2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
        2012/01/18 10:12:49| helperOpenServers: Starting 5/5 'squid_ldap_group' processes
        2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
        2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5
        2012/01/18 10:12:49| Unlinkd pipe opened on FD 73
        2012/01/18 10:12:49| Local cache digest enabled; rebuild/rewrite every 3600/3600 sec
        2012/01/18 10:12:49| Store logging disabled
        2012/01/18 10:12:49| Swap maxSize 0 + 262144 KB, estimated 20164 objects
        2012/01/18 10:12:49| Target number of buckets: 1008
        2012/01/18 10:12:49| Using 8192 Store buckets
        2012/01/18 10:12:49| Max Mem size: 262144 KB
        2012/01/18 10:12:49| Max Swap size: 0 KB
        2012/01/18 10:12:49| Using Least Load store dir selection
        2012/01/18 10:12:49| Set Current Directory to /var/spool/squid3
        2012/01/18 10:12:49| Loaded Icons.
        2012/01/18 10:12:49| Accepting HTTP connections at [::]:3128, FD 74.
        2012/01/18 10:12:49| HTCP Disabled.
        2012/01/18 10:12:49| Squid modules loaded: 0
        2012/01/18 10:12:49| Adaptation support is off.
        2012/01/18 10:12:49| Ready to serve requests.
        2012/01/18 10:12:50| storeLateRelease: released 0 objects

    smb.conf:

        # Domain Authentication Settings
        workgroup = <WORKGROUP>
        security = ads
        password server = <DOMAINNAME>.LOCAL
        realm = <DOMAINNAME>.LOCAL
        ldap ssl = no
        # logging
        log level = 5
        max log size = 50
        # logs split per machine
        log file = /var/log/samba/%m.log
        # max 50KB per log file, then rotate
        ; max log size = 50
        # User settings
        username map = /etc/samba/smbusers
        idmap uid = 10000-20000000
        idmap gid = 10000-20000000
        idmap backend = ad
        ; template primary group = <ad group>
        template shell = /sbin/nologin
        # Winbind Settings
        winbind separator = +
        winbind enum users = Yes
        winbind enum groups = Yes
        winbind netsted groups = Yes
        winbind nested groups = Yes
        winbind cache time = 10
        winbind use default domain = Yes
        # Other Globals
        unix charset = LOCALE
        server string = <SERVERNAME>
        load printers = no
        printing = cups
        cups options = raw
        ; printcap name = /etc/printcap
        # obtain list of printers automatically on SystemV
        ; printcap name = lpstat
        ; printing = cups

    squid.conf:

        auth_param ntlm program /usr/bin/ntlm_auth --require-membership-of=<DOMAINNAME>\\INTERNETZ --helper-protocol=squid-2.5-ntlmssp
        auth_param ntlm children 10
        auth_param basic program /usr/lib/squid3/squid_ldap_auth -R -b "dc=<dcname>,dc=local" -D "cn=administrator,cn=Users,dc=<domainname>,dc=local" -w "******" -f sAMAccountName=%s -h 192.168.15.19:3268
        auth_param basic realm "Proxy Authentifizierung. Bitte geben Sie Ihren Benutzername und Ihr Passwort ein!"
        # (the realm is German for "Proxy authentication. Please enter your username and password!")
        external_acl_type InetGroup %LOGIN /usr/lib/squid3/squid_ldap_group -R -b "dc=<domainname>,dc=local" -D "cn=administrator,cn=Users,dc=<domainname>,dc=local" -w "******" -f "(&(objectclass=person)(sAMAccountName=%v)(memberof=cn=%a,cn=internetz,dc=<domainname>,dc=local))" -h 192.168.15.19:3268
        auth_param negotiate program /usr/lib/squid3/squid_kerb_auth -d
        auth_param negotiate children 10
        auth_param negotiate keep_alive on
        acl localnet proxy_auth REQUIRED
        acl InetAccess external InetGroup Internetz
        http_access allow InetAccess
        http_access deny all
        acl auth proxy_auth REQUIRED
        http_access allow auth

    And something very suspicious: after adding the proxy server to the domain, I see two new entries in AD for the one machine - one with the original computer name, leopoldine, and one named "leopoldine CNF:f8efa4c4-ff0e-4217-939d-f1523b43464d" ?!?

    I tried a lot, really... but I'm stuck on this problem. I even reinstalled all dependent programs and reconfigured them from defaults. The group exists and has me in it. Firefox is running against the old proxy and I use IE for testing the new one. But I get "Access Denied" all the time. To be honest, I'm quite a beginner, so please don't be too harsh. I'm interested in improving, and I'll get whatever information we need to fix this, but I started working two months ago with only 1 1/2 years of training and not a single second of it in Linux ;)
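    For anyone debugging a similar setup: the winbind side can be tested on its own, before involving squid at all, with standard Samba tools. A minimal sketch, not from the thread; the username "kavan" and the group name are taken from the logs and config above, and the domain placeholder is an assumption:

        # Verify the machine's trust relationship with the domain controller
        wbinfo -t

        # Confirm winbind can enumerate domain users and groups
        wbinfo -u | head
        wbinfo -g | grep -i internetz

        # Run the exact helper squid invokes, outside of squid
        # (prompts for the password, prints NT_STATUS on failure)
        /usr/bin/ntlm_auth --username=kavan --domain=<DOMAINNAME> \
            --require-membership-of='<DOMAINNAME>\INTERNETZ'
        echo $?   # 0 means the helper authenticated the user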

    Read the article

  • Blank Screen at boot Ubuntu 12.04 - nvidia-current - Macbook Air 3,2

    - by soulnafein
    I've installed nvidia-current using the Additional Drivers application in Ubuntu 12.04. I need those drivers so I can use accelerated WebGL. After installing the drivers, and rebooting X fails to start and I have a frozen system/dark screen. Below is the content of Xorg.0.log How can I fix this problem? [ 4.666] X.Org X Server 1.11.3 Release Date: 2011-12-16 [ 4.666] X Protocol Version 11, Revision 0 [ 4.666] Build Operating System: Linux 2.6.42-23-generic x86_64 Ubuntu [ 4.666] Current Operating System: Linux david-macbook-air 3.2.0-34-generic #53-Ubuntu SMP Thu Nov 15 10:48:16 UTC 2012 x86_64 [ 4.666] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-34-generic root=UUID=b3d5ae2a-72af-4ef9-b775-0d40b5f80f9b ro quiet splash vt.handoff=7 [ 4.666] Build Date: 29 August 2012 12:12:33AM [ 4.666] xorg-server 2:1.11.4-0ubuntu10.8 (For technical support please see http://www.ubuntu.com/support) [ 4.666] Current version of pixman: 0.24.4 [ 4.666] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. [ 4.666] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 4.666] (==) Log file: "/var/log/Xorg.0.log", Time: Thu Dec 13 10:18:02 2012 [ 4.668] (==) Using system config directory "/usr/share/X11/xorg.conf.d" [ 4.668] (==) No Layout section. Using the first Screen section. [ 4.668] (==) No screen section available. Using defaults. [ 4.668] (**) |-->Screen "Default Screen Section" (0) [ 4.668] (**) | |-->Monitor "<default monitor>" [ 4.668] (==) No monitor specified for screen "Default Screen Section". Using a default monitor configuration. [ 4.668] (==) Automatically adding devices [ 4.668] (==) Automatically enabling devices [ 4.668] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist. [ 4.668] Entry deleted from font path. [ 4.668] (WW) The directory "/usr/share/fonts/X11/100dpi/" does not exist. [ 4.668] Entry deleted from font path. [ 4.669] (WW) The directory "/usr/share/fonts/X11/75dpi/" does not exist. [ 4.669] Entry deleted from font path. [ 4.669] (WW) The directory "/usr/share/fonts/X11/100dpi" does not exist. [ 4.669] Entry deleted from font path. [ 4.669] (WW) The directory "/usr/share/fonts/X11/75dpi" does not exist. [ 4.669] Entry deleted from font path. [ 4.669] (WW) The directory "/var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType" does not exist. [ 4.669] Entry deleted from font path. [ 4.669] (==) FontPath set to: /usr/share/fonts/X11/misc, /usr/share/fonts/X11/Type1, built-ins [ 4.669] (==) ModulePath set to "/usr/lib/x86_64-linux-gnu/xorg/extra-modules,/usr/lib/xorg/extra-modules,/usr/lib/xorg/modules" [ 4.669] (II) The server relies on udev to provide the list of input devices. If no devices become available, reconfigure udev or disable AutoAddDevices. 
        [     4.669] (II) Loader magic: 0x7f6222467b00
        [     4.669] (II) Module ABI versions:
        [     4.669]    X.Org ANSI C Emulation: 0.4
        [     4.669]    X.Org Video Driver: 11.0
        [     4.669]    X.Org XInput driver: 16.0
        [     4.669]    X.Org Server Extension: 6.0
        [     4.670] (--) PCI:*(0:2:0:0) 10de:08a3:106b:00d3 rev 162, Mem @ 0x92000000/16777216, 0x80000000/268435456, 0x90000000/33554432, I/O @ 0x00001000/128, BIOS @ 0x????????/131072
        [     4.670] (II) Open ACPI successful (/var/run/acpid.socket)
        [     4.670] (II) LoadModule: "extmod"
        [     4.671] (II) Loading /usr/lib/xorg/modules/extensions/libextmod.so
        [     4.671] (II) Module extmod: vendor="X.Org Foundation"
        [     4.671]    compiled for 1.11.3, module version = 1.0.0
        [     4.671]    Module class: X.Org Server Extension
        [     4.671]    ABI class: X.Org Server Extension, version 6.0
        [     4.671] (II) Loading extension MIT-SCREEN-SAVER
        [     4.671] (II) Loading extension XFree86-VidModeExtension
        [     4.671] (II) Loading extension XFree86-DGA
        [     4.671] (II) Loading extension DPMS
        [     4.671] (II) Loading extension XVideo
        [     4.671] (II) Loading extension XVideo-MotionCompensation
        [     4.671] (II) Loading extension X-Resource
        [     4.671] (II) LoadModule: "dbe"
        [     4.671] (II) Loading /usr/lib/xorg/modules/extensions/libdbe.so
        [     4.671] (II) Module dbe: vendor="X.Org Foundation"
        [     4.671]    compiled for 1.11.3, module version = 1.0.0
        [     4.671]    Module class: X.Org Server Extension
        [     4.671]    ABI class: X.Org Server Extension, version 6.0
        [     4.671] (II) Loading extension DOUBLE-BUFFER
        [     4.671] (II) LoadModule: "glx"
        [     4.671] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/libglx.so
        [     4.869] (II) Module glx: vendor="NVIDIA Corporation"
        [     4.869]    compiled for 4.0.2, module version = 1.0.0
        [     4.869]    Module class: X.Org Server Extension
        [     4.869] (II) NVIDIA GLX Module  295.40  Thu Apr  5 21:57:38 PDT 2012
        [     4.869] (II) Loading extension GLX
        [     4.869] (II) LoadModule: "record"
        [     4.870] (II) Loading /usr/lib/xorg/modules/extensions/librecord.so
        [     4.870] (II) Module record: vendor="X.Org Foundation"
        [     4.870]    compiled for 1.11.3, module version = 1.13.0
        [     4.870]    Module class: X.Org Server Extension
        [     4.870]    ABI class: X.Org Server Extension, version 6.0
        [     4.870] (II) Loading extension RECORD
        [     4.870] (II) LoadModule: "dri"
        [     4.870] (II) Loading /usr/lib/xorg/modules/extensions/libdri.so
        [     4.870] (II) Module dri: vendor="X.Org Foundation"
        [     4.870]    compiled for 1.11.3, module version = 1.0.0
        [     4.870]    ABI class: X.Org Server Extension, version 6.0
        [     4.870] (II) Loading extension XFree86-DRI
        [     4.870] (II) LoadModule: "dri2"
        [     4.871] (II) Loading /usr/lib/xorg/modules/extensions/libdri2.so
        [     4.871] (II) Module dri2: vendor="X.Org Foundation"
        [     4.871]    compiled for 1.11.3, module version = 1.2.0
        [     4.871]    ABI class: X.Org Server Extension, version 6.0
        [     4.871] (II) Loading extension DRI2
        [     4.871] (==) Matched nvidia as autoconfigured driver 0
        [     4.871] (==) Matched nouveau as autoconfigured driver 1
        [     4.871] (==) Matched nv as autoconfigured driver 2
        [     4.871] (==) Matched vesa as autoconfigured driver 3
        [     4.871] (==) Matched fbdev as autoconfigured driver 4
        [     4.871] (==) Assigned the driver to the xf86ConfigLayout
        [     4.871] (II) LoadModule: "nvidia"
        [     4.871] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so
        [     4.887] (II) Module nvidia: vendor="NVIDIA Corporation"
        [     4.887]    compiled for 4.0.2, module version = 1.0.0
        [     4.887]    Module class: X.Org Video Driver
        [     4.892] (II) LoadModule: "nouveau"
        [     4.894] (II) Loading /usr/lib/xorg/modules/drivers/nouveau_drv.so
        [     4.894] (II) Module nouveau: vendor="X.Org Foundation"
        [     4.894]    compiled for 1.11.3, module version = 1.0.2
        [     4.894]    Module class: X.Org Video Driver
        [     4.894]    ABI class: X.Org Video Driver, version 11.0
        [     4.894] (II) LoadModule: "nv"
        [     4.895] (WW) Warning, couldn't open module nv
        [     4.895] (II) UnloadModule: "nv"
        [     4.895] (II) Unloading nv
        [     4.895] (EE) Failed to load module "nv" (module does not exist, 0)
        [     4.895] (II) LoadModule: "vesa"
        [     4.895] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so
        [     4.896] (II) Module vesa: vendor="X.Org Foundation"
        [     4.896]    compiled for 1.11.3, module version = 2.3.0
        [     4.896]    Module class: X.Org Video Driver
        [     4.896]    ABI class: X.Org Video Driver, version 11.0
        [     4.896] (II) LoadModule: "fbdev"
        [     4.896] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so
        [     4.896] (II) Module fbdev: vendor="X.Org Foundation"
        [     4.896]    compiled for 1.11.3, module version = 0.4.2
        [     4.896]    ABI class: X.Org Video Driver, version 11.0
        [     4.896] (==) Matched nvidia as autoconfigured driver 0
        [     4.896] (==) Matched nouveau as autoconfigured driver 1
        [     4.896] (==) Matched nv as autoconfigured driver 2
        [     4.896] (==) Matched vesa as autoconfigured driver 3
        [     4.896] (==) Matched fbdev as autoconfigured driver 4
        [     4.896] (==) Assigned the driver to the xf86ConfigLayout
        [     4.896] (II) LoadModule: "nvidia"
        [     4.896] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so
        [     4.896] (II) Module nvidia: vendor="NVIDIA Corporation"
        [     4.896]    compiled for 4.0.2, module version = 1.0.0
        [     4.896]    Module class: X.Org Video Driver
        [     4.896] (II) UnloadModule: "nvidia"
        [     4.896] (II) Unloading nvidia
        [     4.896] (II) Failed to load module "nvidia" (already loaded, 32610)
        [     4.896] (II) LoadModule: "nouveau"
        [     4.897] (II) Loading /usr/lib/xorg/modules/drivers/nouveau_drv.so
        [     4.897] (II) Module nouveau: vendor="X.Org Foundation"
        [     4.897]    compiled for 1.11.3, module version = 1.0.2
        [     4.897]    Module class: X.Org Video Driver
        [     4.897]    ABI class: X.Org Video Driver, version 11.0
        [     4.897] (II) UnloadModule: "nouveau"
        [     4.897] (II) Unloading nouveau
        [     4.897] (II) Failed to load module "nouveau" (already loaded, 32610)
        [     4.897] (II) LoadModule: "nv"
        [     4.897] (WW) Warning, couldn't open module nv
        [     4.897] (II) UnloadModule: "nv"
        [     4.897] (II) Unloading nv
        [     4.897] (EE) Failed to load module "nv" (module does not exist, 0)
        [     4.897] (II) LoadModule: "vesa"
        [     4.898] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so
        [     4.898] (II) Module vesa: vendor="X.Org Foundation"
        [     4.898]    compiled for 1.11.3, module version = 2.3.0
        [     4.898]    Module class: X.Org Video Driver
        [     4.898]    ABI class: X.Org Video Driver, version 11.0
        [     4.898] (II) UnloadModule: "vesa"
        [     4.898] (II) Unloading vesa
        [     4.898] (II) Failed to load module "vesa" (already loaded, 0)
        [     4.898] (II) LoadModule: "fbdev"
        [     4.898] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so
        [     4.898] (II) Module fbdev: vendor="X.Org Foundation"
        [     4.898]    compiled for 1.11.3, module version = 0.4.2
        [     4.898]    ABI class: X.Org Video Driver, version 11.0
        [     4.898] (II) UnloadModule: "fbdev"
        [     4.898] (II) Unloading fbdev
        [     4.899] (II) Failed to load module "fbdev" (already loaded, 0)
        [     4.899] (II) NVIDIA dlloader X Driver  295.40  Thu Apr  5 21:38:35 PDT 2012
        [     4.899] (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs
        [     4.899] (II) NOUVEAU driver Date:   Wed Sep 12 13:42:43 2012 +0200
        [     4.899] (II) NOUVEAU driver for NVIDIA chipset families:
        [     4.899]    RIVA TNT        (NV04)
        [     4.899]    RIVA TNT2       (NV05)
        [     4.899]    GeForce 256     (NV10)
        [     4.899]    GeForce 2       (NV11, NV15)
        [     4.899]    GeForce 4MX     (NV17, NV18)
        [     4.899]    GeForce 3       (NV20)
        [     4.900]    GeForce 4Ti     (NV25, NV28)
        [     4.900]    GeForce FX      (NV3x)
        [     4.900]    GeForce 6       (NV4x)
        [     4.900]    GeForce 7       (G7x)
        [     4.900]    GeForce 8       (G8x)
        [     4.900]    GeForce GTX 200 (NVA0)
        [     4.900]    GeForce GTX 400 (NVC0)
        [     4.900] (II) VESA: driver for VESA chipsets: vesa
        [     4.900] (II) FBDEV: driver for framebuffer: fbdev
        [     4.900] (++) using VT number 7
        [     4.902] (II) Loading sub module "fb"
        [     4.902] (II) LoadModule: "fb"
        [     4.902] (II) Loading /usr/lib/xorg/modules/libfb.so
        [     4.902] (II) Module fb: vendor="X.Org Foundation"
        [     4.902]    compiled for 1.11.3, module version = 1.0.0
        [     4.902]    ABI class: X.Org ANSI C Emulation, version 0.4
        [     4.902] (II) Loading sub module "wfb"
        [     4.902] (II) LoadModule: "wfb"
        [     4.903] (II) Loading /usr/lib/xorg/modules/libwfb.so
        [     4.905] (II) Module wfb: vendor="X.Org Foundation"
        [     4.905]    compiled for 1.11.3, module version = 1.0.0
        [     4.905]    ABI class: X.Org ANSI C Emulation, version 0.4
        [     4.905] (II) Loading sub module "ramdac"
        [     4.905] (II) LoadModule: "ramdac"
        [     4.905] (II) Module "ramdac" already built-in
        [     4.907] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so
        [     4.907] (II) Loading /usr/lib/xorg/modules/libwfb.so
        [     4.907] (II) Loading /usr/lib/xorg/modules/libfb.so
        [     4.912] (WW) Falling back to old probe method for vesa
        [     4.912] (WW) Falling back to old probe method for fbdev
        [     4.912] (II) Loading sub module "fbdevhw"
        [     4.912] (II) LoadModule: "fbdevhw"
        [     4.912] (II) Loading /usr/lib/xorg/modules/libfbdevhw.so
        [     4.912] (II) Module fbdevhw: vendor="X.Org Foundation"
        [     4.912]    compiled for 1.11.3, module version = 0.0.2
        [     4.912]    ABI class: X.Org Video Driver, version 11.0
        [     4.912] (II) NVIDIA(0): Creating default Display subsection in Screen section "Default Screen Section" for depth/fbbpp 24/32
        [     4.912] (==) NVIDIA(0): Depth 24, (==) framebuffer bpp 32
        [     4.912] (==) NVIDIA(0): RGB weight 888
        [     4.912] (==) NVIDIA(0): Default visual is TrueColor
        [     4.912] (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0)
        [     4.912] (**) NVIDIA(0): Enabling 2D acceleration
        [     5.442] (EE) NVIDIA(0): Failed to initialize the display subsystem for the NVIDIA
        [     5.442] (EE) NVIDIA(0):     graphics device!
        [     5.442] (EE) NVIDIA(0): Failed to get supported display device(s)
        [     5.442] (EE) NVIDIA(0): Failed to initialize dac HAL
        [     5.442] (II) UnloadModule: "nvidia"
        [     5.442] (II) Unloading nvidia
        [     5.442] (II) UnloadModule: "wfb"
        [     5.442] (II) Unloading wfb
        [     5.442] (II) UnloadModule: "fb"
        [     5.443] (II) Unloading fb
        [     5.443] (EE) Screen(s) found, but none have a usable configuration.
        [     5.443]
        Fatal server error:
        [     5.443] no screens found
        [     5.443] Please consult the The X.Org Foundation support at http://wiki.x.org for help.
        [     5.443] Please also check the log file at "/var/log/Xorg.0.log" for additional information.
        [     5.443]
        [     5.447] ddxSigGiveUp: Closing log
        [     5.447] Server terminated with error (1). Closing log file.
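    For readers hitting the same "Failed to initialize the display subsystem" error, a common recovery path (not an answer from this thread, just the usual first aid) is to remove the proprietary driver from a text console and fall back to the default nouveau driver. A minimal sketch, assuming you can reach a VT with Ctrl+Alt+F1 or recovery mode; package names match Ubuntu 12.04:

        # Remove the proprietary driver and any generated X config
        sudo apt-get purge nvidia-current
        sudo rm -f /etc/X11/xorg.conf

        # Check whether an nvidia package left nouveau blacklisted,
        # then rebuild the initramfs and reboot
        grep -r nouveau /etc/modprobe.d/
        sudo update-initramfs -u
        sudo reboot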

    Read the article

  • Can't Install php5-msql

    - by user210445
    Hello friends, I'm finishing the process of installing Apache/PHP/MySQL, but this shows up:

        # sudo apt-get install mysql-server php5-msql
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package php5-msql

    After some adjustments (the package is actually named php5-mysql), this happened:

        angel@Voix:~$ sudo apt-get install mysql-server php5-mysql
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        mysql-server is already the newest version.
        php5-mysql is already the newest version.
        The following packages were automatically installed and are no longer required:
          gir1.2-ubuntuoneui-3.0 libubuntuoneui-3.0-1 thunderbird-globalmenu
        Use 'apt-get autoremove' to remove them.
        The following extra packages will be installed:
          mysql-server-5.5
        Suggested packages:
          tinyca mailx
        The following packages will be upgraded:
          mysql-server-5.5
        1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        4 not fully installed or removed.
        Need to get 0 B/8,827 kB of archives.
        After this operation, 32.7 MB of additional disk space will be used.
        Do you want to continue [Y/n]?
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
        (Reading database ...
        dpkg: warning: files list file for package `mysql-server-5.5' missing, assuming package has no files currently installed.
        (Reading database ... 172971 files and directories currently installed.)
        Preparing to replace mysql-server-5.5 5.5.34-0ubuntu0.12.04.1 (using .../mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb) ...
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
        dpkg: error processing /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb (--unpack):
         subprocess new pre-installation script returned error exit status 1
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
        dpkg: error while cleaning up:
         subprocess new post-removal script returned error exit status 1
        Errors were encountered while processing:
         /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

        angel@Voix:~$ sudo apt-get install mysql-server-5.5
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages were automatically installed and are no longer required:
          gir1.2-ubuntuoneui-3.0 libubuntuoneui-3.0-1 thunderbird-globalmenu
        Use 'apt-get autoremove' to remove them.
        Suggested packages:
          tinyca mailx
        The following packages will be upgraded:
          mysql-server-5.5
        1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        4 not fully installed or removed.
        Need to get 0 B/8,827 kB of archives.
        After this operation, 32.7 MB of additional disk space will be used.
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
        (Reading database ...
        dpkg: warning: files list file for package `mysql-server-5.5' missing, assuming package has no files currently installed.
        (Reading database ... 172971 files and directories currently installed.)
        Preparing to replace mysql-server-5.5 5.5.34-0ubuntu0.12.04.1 (using .../mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb) ...
debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error processing /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb (--unpack): subprocess new pre-installation script returned error exit status 1 debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) angel@Voix:~$ sudo apt-get install mysql-server php5-mysql Reading package lists... Done Building dependency tree Reading state information... Done mysql-server is already the newest version. php5-mysql is already the newest version. The following packages were automatically installed and are no longer required: gir1.2-ubuntuoneui-3.0 libubuntuoneui-3.0-1 thunderbird-globalmenu Use 'apt-get autoremove' to remove them. The following extra packages will be installed: mysql-server-5.5 Suggested packages: tinyca mailx The following packages will be upgraded: mysql-server-5.5 1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 4 not fully installed or removed. Need to get 0 B/8,827 kB of archives. After this operation, 32.7 MB of additional disk space will be used. Do you want to continue [Y/n]? y debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable (Reading database ... dpkg: warning: files list file for package `mysql-server-5.5' missing, assuming package has no files currently installed. (Reading database ... 172971 files and directories currently installed.) Preparing to replace mysql-server-5.5 5.5.34-0ubuntu0.12.04.1 (using .../mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb) ... debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error processing /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb (--unpack): subprocess new pre-installation script returned error exit status 1 debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • CodePlex Daily Summary for Monday, November 11, 2013

    Popular Releases

    ULS Log Viewer Feature: ULS Log Viewer Feature: What's getting installed? The solution includes a Custom Action (a new action in the Site menu). Features: - Only Site Administrators can see the Custom Action in the site menu. - Multi-language SUPPORT! (English, Hebrew). - Works against the Secure Store Service to use administrator rights for access to the ULS log path on farm servers (use the guide for creating a new SSA). - Loads only the last log file from the ULS log path. - Gets the ULS log path automatically from Central Administration. - Choose from whi...

    Lib.Web.Mvc & Yet another developer blog: Lib.Web.Mvc 6.3.3: Lib.Web.Mvc is a library which contains some helper classes for ASP.NET MVC, such as a strongly typed jqGrid helper, an XSL transformation HtmlHelper/ActionResult, a FileResult with range request support, custom attributes and more. The release contains: Lib.Web.Mvc.dll with its xml documentation file; standalone documentation in a chm file and change log; library source code. A sample application for the strongly typed jqGrid helper is available here. A sample application for the XSL transformation HtmlHelper/ActionRe...

    Magick.NET: Magick.NET 6.8.7.501: Magick.NET linked with ImageMagick 6.8.7.5. Breaking changes: - Refactored MagickImageStatistics to prepare for upcoming changes in ImageMagick 7. - Renamed MagickImage.SetOption to SetDefine.

    Media Companion: Media Companion MC3.587b: Fixed: * TV - Locked shows display correctly after refresh. * TV - Missing episodes display in the correct colour for missed or to-be-aired. * TV - Rescrape of multi-episodes working. * TV - Cache fix where episodes were being written multiple times. * TV - Fixed cache writing missing episodes when "Display missing eps" was disabled. Revision History

    Generate report of user mailbox size for Exchange 2010: Script Download: http://gallery.technet.microsoft.com/scriptcenter/Generate-report-of-user-e4e9afca

    Check SQL Server a specified database index fragmentation percentage (SQL): Script Download: http://gallery.technet.microsoft.com/scriptcenter/Check-SQL-Server-a-a5758043

    Save attachments from multiple selected items in Outlook (VBA): Script Download: http://gallery.technet.microsoft.com/scriptcenter/Save-attachments-from-5b6bf54b

    Remove Windows Store apps in Windows 8: Script Download: http://gallery.technet.microsoft.com/scriptcenter/Remove-Windows-Store-Apps-a00ef4a4

    PCSX-Reloaded: 1.9.94: General changes: support for compressed audio in cue files; ECM support. OS X changes: 32-bit support has been dropped; partial French and Hungarian translations.

    Dynamics AX 2012 R2 Kitting: AX 2012 R2 CU7 release of Kitting: Here is the AX 2012 R2 CU7 release of kitting. Released both as an XPO and a model.

    VidCoder: 1.5.12 Beta: Added an option to preserve Created and Last Modified times when converting files, in Options -> Advanced. Added an option to mark an automatically selected subtitle track as "Default". Updated HandBrake core to SVN 5878. Fixed auto passthrough not applying just after switching to it.
Fixed bug where preset/profile/tune could disappear when reverting a preset.Toolbox for Dynamics CRM 2011/2013: XrmToolBox (v1.2013.9.25): XrmToolbox improvement Correct changing connection from the status dropdown Tools improvement Updated tool Audit Center (v1.2013.9.10) -> Publish entities Iconator (v1.2013.9.27) -> Optimized asynchronous loading of images and entities MetadataDocumentGenerator (v1.2013.11.6) -> Correct system entities reading with incorrect attribute type Script Manager (v1.2013.9.27) -> Retrieve only custom events SiteMapEditor (v1.2013.11.7) -> Reset of CRM 2013 SiteMap ViewLayoutReplicator (v1.201...Microsoft SQL Server Product Samples: Database: SQL Server 2014 CTP2 In-Memory OLTP Sample, based: This sample showcases the new In-Memory OLTP feature, which is part of SQL Server 2014 CTP2. It shows the new memory-optimized tables and natively-compiled stored procedures, and can be used to show the performance benefit of in-memory OLTP. Installation instructions for the sample are included in the file ‘awinmemsample.doc’, which is part of the download. You can ask a question about this sample at the SQL Server Samples Forum Composite C1 CMS - Open Source on .NET: Composite C1 4.1: Composite C1 4.1 (4.1.5058.34326) Write a review for this release - help us improve, recommend us. Getting started If you are new to Composite C1 and want to install it: http://docs.composite.net/Getting-started What's new in Composite C1 4.1 The following are highlights of major changes since Composite C1 4.0: General user features: Drag-and-drop images and files like PDF and Word directly from own your desktop and folders into page content Allow you to install Composite Form Builder ...CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.9.0: Implemented Recent Scripts list Added checking for plugin updates from AboutBox Multiple formatting improvements/fixes Implemented selection of the CLR version when preparing distribution package Added project panel button for showing plugin shortcuts list Added 'What's New?' panel Fixed auto-formatting scrolling artifact Implemented navigation to "logical" file (vs. auto-generated) file from output panel To avoid the DLLs getting locked by OS use MSI file for the installation.Resources Editor: Resources Editor: Stable releaseWPF Extended DataGrid: WPF Extended DataGrid 2.0.0.10 binaries: Now row summaries are updated whenever autofilter value sis modified.Social Network Importer for NodeXL: SocialNetImporter(v.1.9.1): This new version includes: - Include me option is back - Fixed the login bug reported latelyVeraCrypt: VeraCrypt version 1.0c: Changes between 1.0b and 1.0c (11 November 2013) : Set correctly the minimum required version in volumes header (this value must always follow the program version after any major changes). This also solves also the hidden volume issueCaptcha MVC: Captcha MVC 2.5: v 2.5: Added support for MVC 5. The DefaultCaptchaManager is no longer throws an error if the captcha values was entered incorrectly. Minor changes. v 2.4.1: Fixed issues with deleting incorrect values of the captcha token in the SessionStorageProvider. This could lead to a situation when the captcha was not working with the SessionStorageProvider. Minor changes. v 2.4: Changed the IIntelligencePolicy interface, added ICaptchaManager as parameter for all methods. 
Improved font size ...New ProjectsASP.NET Web API: ASP.NET Web API is a framework that makes it easy to build HTTP services that reach a broad range of clients, including browsers and mobile devices. ASP.NET WebDungeonPaper: DungeonPaper will be a platform for RPG players to utilize computers to replace traditional pen and paper in their campaigns.EMS: demoEvent Dispatcher: Event Dispatcher is a powershell module for centralized logging. Supports filtering events, and outputs to flat files or the Windows event log.Koopakiller.Numerics: A growing Math-Library for .NET, provided as a portable class library.NetroTrix: NetroTrix is a LAN Messenger Project.....Sharp Explorer: Sort of an alpha Play with C# to make a treeView Windows explorer. Currently multi-threaded dbl click to explore, r-click to preview some text files.TI Sensor Tag Library: A library to access the Texas Instruments Sensor Tag in Windows Store apps with Bluetooth GATT. Written in C#.ULS Log Viewer Feature: "ULS Log Viewer Feature" used by SharePoint Administrators, Developers and SharePoint Site Managers for get easily the real error behind the Correlation ID.Visual Studio Code Line Counter: VS Code Line Counter parses a Visual Studio solution file (.sln), reads each included project, and counts the number of lines in each included file.Vulcanus: TODOWax - The WiX Setup Project Editor: An interactive editor for WiX setup projects.

    Read the article

  • How to troubleshoot errors with TeamCity

    - by Tomas Lycken
    I'm following this guide to set up a small environment for source control and automated builds - mostly for learning what it is and how it works, but also to use in those of my hobby projects that I believe will actually be useful some day. However, at the step where he commits and builds, I fail to get a success status in the TeamCity history log. I keep getting the error described in the log below. I have verified with Windows Explorer that the solution file it can't find is actually there, so I really don't know what to do. How do I fix/troubleshoot this?

    [15:16:06]: Checking for changes
    [15:16:08]: Clearing temporary directory: C:\Program Files\JetBrains\BuildAgent\temp\buildTmp
    [15:16:08]: Checkout directory: C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588
    [15:16:08]: Updating sources: server side checkout...
    [15:16:08]: [Updating sources: server side checkout...] Building incremental patch for VCS root: DemoProjects
    [15:16:09]: [Updating sources: server side checkout...] Repository sources transferred
    [15:16:09]: [Updating sources: server side checkout...] Updating C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588
    [15:16:10]: Start process: "c:\Program Files\JetBrains\BuildAgent\bin\..\plugins\dotnetPlugin\bin\JetBrains.BuildServer.MsBuildBootstrap.exe" "/workdir:C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588" /msbuildPath:C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe
    [15:16:10]: in: C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588
    [15:16:11]: TeamCity MSBuild bootstrap v5.1 Copyright (C) JetBrains s.r.o.
    [15:16:11]: Application failed with internal error:
    [15:16:11]: Failed to find project file at path: C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588\Nehemia\trunk\Nehemiah.sln
    [15:16:11]: System.Exception: Failed to find project file at path: C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588\Nehemia\trunk\Nehemiah.sln
    [15:16:11]: at JetBrains.BuildServer.MSBuildBootstrap.Impl.MSBuildBootstrapFactory.Create(IClientRunArgs args) in c:\Agent\work\6223f0c8b1d45aaa\src\MSBuildBootstrap.Core\src\Impl\MSBuildBootstrapFactory.cs:line 25
    [15:16:11]: at JetBrains.BuildServer.MSBuildBootstrap.Program.Run(String[] _args) in c:\Agent\work\6223f0c8b1d45aaa\src\MSBuildBootstrap\src\Program.cs:line 66
    [15:16:11]: Process exited with code -11
    [15:16:11]: Build finished

    Read the article

  • fatal error C1075: end of file found before the left brace and read and write files not working

    - by user320950
    Could someone also tell me whether the actions I am attempting, which are stated in the comments, are correct or not? I am new to C++; I think it's correct, but I have doubts.

    #include<iostream>
    #include<fstream>
    #include<cstdlib>
    #include<iomanip>
    using namespace std;

    int main()
    {
        ifstream in_stream;   // reads itemlist.txt
        ofstream out_stream1; // writes in items.txt
        ifstream in_stream2;  // reads pricelist.txt
        ofstream out_stream3; // writes in plist.txt
        ifstream in_stream4;  // read recipt.txt
        ofstream out_stream5; // write display.txt
        float price = ' ', curr_total = 0.0;
        int wrong = 0;
        int itemnum = ' ';
        char next;

        in_stream.open("ITEMLIST.txt", ios::in); // list of avaliable items
        if (in_stream.fail()) // check to see if itemlist.txt is open
        {
            wrong++;
            cout << " the error occured here0, you have " << wrong++ << " errors" << endl;
            cout << "Error opening the file\n" << endl;
            exit(1);
        }
        else { // NOTE: this brace is never closed below - the likely cause of C1075
            cout << " System ran correctly " << endl;

        out_stream1.open("listWititems.txt", ios::out); // list of avaliable items
        if (out_stream1.fail()) // check to see if itemlist.txt is open
        {
            wrong++;
            cout << " the error occured here1, you have " << wrong++ << " errors" << endl;
            cout << "Error opening the file\n";
            exit(1);
        }
        else {
            cout << " System ran correctly " << endl;
        }

        in_stream2.open("PRICELIST.txt", ios::in);
        if (in_stream2.fail())
        {
            wrong++;
            cout << " the error occured here2, you have " << wrong++ << " errors" << endl;
            cout << "Error opening the file\n";
            exit(1);
        }
        else {
            cout << " System ran correctly " << endl;
        }

        out_stream3.open("listWitdollars.txt", ios::out);
        if (out_stream3.fail())
        {
            wrong++;
            cout << " the error occured here3, you have " << wrong++ << " errors" << endl;
            cout << "Error opening the file\n";
            exit(1);
        }
        else {
            cout << " System ran correctly " << endl;
        }

        in_stream4.open("display.txt", ios::in);
        if (in_stream4.fail())
        {
            wrong++;
            cout << " the error occured here4, you have " << wrong++ << " errors" << endl;
            cout << "Error opening the file\n";
            exit(1);
        }
        else {
            cout << " System ran correctly " << endl;
        }

        out_stream5.open("showitems.txt", ios::out);
        if (out_stream5.fail())
        {
            wrong++;
            cout << " the error occured here5, you have " << wrong++ << " errors" << endl;
            cout << "Error opening the file\n";
            exit(1);
        }
        else {
            cout << " System ran correctly " << endl;
        }

        in_stream.close(); // closing files.
        out_stream1.close();
        in_stream2.close();
        out_stream3.close();
        in_stream4.close();
        out_stream5.close();

        system("pause");

        in_stream.setf(ios::fixed);
        while (in_stream.eof())
        {
            in_stream >> itemnum;
            cin.clear();
            cin >> next;
        }

        out_stream1.setf(ios::fixed);
        while (out_stream1.eof())
        {
            out_stream1 << itemnum;
            cin.clear();
            cin >> next;
        }

        in_stream2.setf(ios::fixed);
        in_stream2.setf(ios::showpoint);
        in_stream2.precision(2);
        while ((price == (price * 1.00)) && (itemnum == (itemnum * 1)))
        {
            while (in_stream2 >> itemnum >> price) // gets itemnum and price
            {
                while (in_stream2.eof()) // reads file to end of file
                {
                    in_stream2 >> itemnum;
                    in_stream2 >> price;
                    price++;
                    curr_total = price++;
                    in_stream2 >> curr_total;
                    cin.clear(); // allows more reading
                    cin >> next;
                }
            }
        }

        out_stream3.setf(ios::fixed);
        out_stream3.setf(ios::showpoint);
        out_stream3.precision(2);
        while ((price == (price * 1.00)) && (itemnum == (itemnum * 1)))
        {
            while (out_stream3 << itemnum << price)
            {
                while (out_stream3.eof()) // reads file to end of file
                {
                    out_stream3 << itemnum;
                    out_stream3 << price;
                    price++;
                    curr_total = price++;
                    out_stream3 << curr_total;
                    cin.clear(); // allows more reading
                    cin >> next;
                }
                return itemnum, price;
            }
        }

        in_stream4.setf(ios::fixed);
        in_stream4.setf(ios::showpoint);
        in_stream4.precision(2);
        while (in_stream4.eof())
        {
            in_stream4 >> itemnum >> price >> curr_total;
            cin.clear();
            cin >> next;
        }

        out_stream5.setf(ios::fixed);
        out_stream5.setf(ios::showpoint);
        out_stream5.precision(2);
        out_stream5 << setw(5) << " itemnum " << setw(5) << " price " << setw(5) << " curr_total " << endl; // sends items and prices to receipt.txt
        out_stream5 << setw(5) << itemnum << setw(5) << price << setw(5) << curr_total; // sends items and prices to receipt.txt
        out_stream5 << " You have a total of " << wrong++ << " errors " << endl;
    }
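
    For reference, C1075 almost always means a block was opened and never closed. A minimal sketch of the same open-and-check pattern with every else block closed immediately (the file name is kept from the question; everything else is illustrative):

    #include <iostream>
    #include <fstream>
    #include <cstdlib>
    using namespace std;

    int main()
    {
        ifstream in_stream;
        in_stream.open("ITEMLIST.txt", ios::in);
        if (in_stream.fail())
        {
            cout << "Error opening the file\n";
            exit(1);
        }
        else
        {
            cout << "System ran correctly" << endl;
        } // close the else immediately instead of leaving it open

        in_stream.close();
        return 0;
    }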

    Read the article

  • Problem Linking Boost Filesystem Library in Microsoft Visual C++

    - by Scott
    Hello. I am having trouble getting my project to link to the Boost (version 1.37.0) Filesystem lib file in Microsoft Visual C++ 2008 Express Edition. The Filesystem library is not a header-only library. I have been following the Getting Started on Windows guide posted on the official Boost web page. Here are the steps I have taken:

    I used bjam to build the complete set of lib files using:
    bjam --build-dir="C:\Program Files\boost\build-boost" --toolset=msvc --build-type=complete

    I copied the /libs directory (located in C:\Program Files\boost\build-boost\boost\bin.v2) to C:\Program Files\boost\boost_1_37_0\libs.

    In Visual C++, under Project Properties > Additional Library Directories I added these paths:
    C:\Program Files\boost\boost_1_37_0\libs
    C:\Program Files\boost\boost_1_37_0\libs\filesystem\build\msvc-9.0express\debug\link-static\threading-multi
    I added the second one out of desperation. It is the exact directory where libboost_system-vc90-mt-gd-1_37.lib resides.

    In Configuration Properties > C/C++ > General > Additional Include Directories I added the following path:
    C:\Program Files\boost\boost_1_37_0

    Then, to put the icing on the cake, under Tools > Options > VC++ Directories > Library files, I added the same directories mentioned in step 3.

    Despite all this, when I build my project I get the following error:
    fatal error LNK1104: cannot open file 'libboost_system-vc90-mt-gd-1_37.lib'

    Additionally, here is the code that I am attempting to compile, as well as a screenshot of the aforementioned directory where the (assumedly correct) lib file resides:

    #include "boost/filesystem.hpp"    // includes all needed Boost.Filesystem declarations
    #include <iostream>                // for std::cout
    using namespace boost::filesystem; // for ease of tutorial presentation;
                                       // a namespace alias is preferred practice in real code
                                       // (the original "using boost::filesystem;" does not compile)
    using namespace std;

    int main()
    {
        cout << "Hello, world!" << endl;
        return 0;
    }

    Can anyone help me out? Let me know if you need to know anything else. As always, thanks in advance.
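
    For what it's worth, the conventional layout from the Boost Getting Started guide is to let bjam collect all the built libraries into a single stage\lib folder and point the linker there, rather than at the bin.v2 build tree. The paths below mirror the question and may differ on your machine:

    cd "C:\Program Files\boost\boost_1_37_0"
    bjam --toolset=msvc --build-type=complete stage
    rem then add C:\Program Files\boost\boost_1_37_0\stage\lib to
    rem Additional Library Directories; MSVC auto-linking then finds
    rem libboost_system-vc90-mt-gd-1_37.lib there by name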

    Read the article

  • VS2010: "Select Resource" dialog & resx location

    - by Dav
    Got two issues with the VS2010 / VS2008 Select Resource dialog - the one that appears when you want to add an image to a button in a WinForms app, for example.

    Give me my files back!
    It only seems to see the default project resources file (Properties\Resources.resx) and resx files in the project root (say MyProject\famfamfam.resx). We have quite a few icons all over the app, and because some of them come from different icon sets (like famfamfam) and some are related to this project only, we'd like to keep them separate. For that same reason (keeping the solution neat & tidy) we want to store these extra resource files in the Resources folder (e.g. Resources\famfamfam.resx). However, we'd also like to keep using the Select Resource dialog :-) Because it does not see the 'extra' resource files, we're having to select a 'fake' icon (from the global Resources.resx file) and then manually change the reference to the right icon in .Designer.cs. As you can imagine, this is a pain.

    Stop modifying my files!
    The second issue is a bit more annoying. We use the excellent MultiLang add-in for Visual Studio to globalize our app. It stores its translations in MultiLang.resx and MultiLang.XY.resx files in the project root, where XY is a language code, e.g. .cs.resx for Czech. These have to be set to the No code generation access modifier. What Select Resource seems to do is set all .resx files it can find to Internal.

    Exec summary
    Is there a way to convince the Select Resource dialog to look for extra .resx files anywhere besides the project root? Is there any way to stop it from modifying the access modifier of the resources it does see (other than filing a bug with MS)?

    Thanks in advance for any suggestions!

    Read the article

  • Custom webserver caching

    - by Mark Kinsella
    I'm working with a custom webserver on an embedded system and having some problems correctly setting my HTTP headers for caching. Our webserver is generating all dynamic content as XML and we're using semi-static XSL files to display it, with some dynamic JSON requests thrown in for good measure along with semi-static images. I say "semi-static" because the problems occur when we need to do a firmware update, which might change the XSL and image files.

    Here's what needs to be done: cache the XSL and image files, and do not cache the XML and JSON responses. I have full control over the HTTP response and am currently:

    - Using ETags with the XSL and image files, using the modified time and size to generate the ETag
    - Setting Cache-Control: no-cache on the XML and JSON responses

    As I said, everything works dandy until a firmware update, when the XSL and image files are sometimes cached. I've seen it work fine with the latest versions of Firefox and Safari but have had some problems with IE.

    I know one solution to this problem would be to simply rename the XSL and image files after each version (e.g. logo-v1.1.png, logo-v1.2.png) and set the Expires header to a date in the future, but this would be difficult with the XSL files and I'd like to avoid it. Note: there is a clock on the unit, but it requires the user to set it and might not be 100% reliable, which is what might be causing my caching issues when using ETags.

    What's the best practice that I should employ? I'd like to avoid as many webserver requests as possible, but invalidating old XSL and image files after a software update is the #1 priority.
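
    As a starting point, one possible header policy matching those goals (values are illustrative, not prescriptive); forcing revalidation via ETag rather than relying on Expires dates also sidesteps the unreliable clock:

    # semi-static XSL and images: revalidate cheaply via ETag
    ETag: "5f3a-1c2b"
    Cache-Control: max-age=0, must-revalidate

    # dynamic XML and JSON: never cache
    Cache-Control: no-cache, no-store, must-revalidate
    Pragma: no-cache
    Expires: 0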

    Read the article

  • Using themed css files requires a header control on the page. (e.g. <head runat="server" />).

    - by wide
    It works locally; when I upload to the server, I get this error:

    Using themed css files requires a header control on the page. (e.g. <head runat="server" />).

    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

    Exception Details: System.InvalidOperationException: Using themed css files requires a header control on the page. (e.g. <head runat="server" />).

    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Stack Trace:
    [InvalidOperationException: Using themed css files requires a header control on the page. (e.g. <head runat="server" />).]
       System.Web.UI.PageTheme.SetStyleSheet() +2458406
       System.Web.UI.Page.OnInit(EventArgs e) +8699420
       System.Web.UI.Control.InitRecursive(Control namingContainer) +333
       System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +378

    -------------------------------------album1.aspx------------------
    <%@ Page Language="C#" MasterPageFile="~/MasterPage.master" AutoEventWireup="true" CodeFile="album1.aspx.cs" Inherits="album1" Title="Özkan Köylü | Albüm" %>
    <asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server">
        <script src="scripts/Silverlight.js" type="text/javascript"></script>
        <script src="scripts/Default_html.js" type="text/javascript"></script>
        <script src="scripts/Page.xaml.js" type="text/javascript"></script>
        <script src="scripts/gallery.js" type="text/javascript"></script>
        <div align="center" id="SilverlightControlHost">
            <script type="text/javascript">
                createSilverlight();
            </script>
        </div>
    </asp:Content>
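
    The exception is literal about the fix: since the page uses a master page, the master itself needs a server-side head control. A minimal sketch of the relevant part of MasterPage.master, assuming the rest of the master stays as-is:

    <%@ Master Language="C#" %>
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head runat="server">
        <title>...</title>
    </head>
    <body>
        <form id="form1" runat="server">
            <asp:ContentPlaceHolder ID="ContentPlaceHolder1" runat="server" />
        </form>
    </body>
    </html>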

    Read the article

  • How to detect if a file is PDF or TIFF?

    - by eviljack
    Please bear with me as I've been thrown into the middle of this project without knowing all the background. If you've got WTF questions, trust me, I have them too.

    Here is the scenario: I've got a bunch of files residing on an IIS server. They have no file extension on them - just naked files with names like "asda-2342-sd3rs-asd24-ut57" and so on. Nothing intuitive. The problem is I need to serve up files on an ASP.NET (2.0) page and display the TIFF files as TIFF and the PDF files as PDF. Unfortunately I don't know which is which, and I need to be able to display them appropriately in their respective formats.

    For example, let's say that there are 2 files I need to display, one TIFF and one PDF. The page should show a TIFF image, and perhaps a link that would open up the PDF in a new tab/window.

    The problem: as these files are all extension-less, I had to force IIS to just serve everything up as TIFF. But if I do this, the PDF files won't display. I could change IIS to force the MIME type to be PDF for unknown file extensions, but I'd have the reverse problem. http://support.microsoft.com/kb/326965

    Is this problem easier than I think, or is it as nasty as I am expecting?
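
    Since both formats carry magic numbers, one workable approach is to sniff the first bytes server-side and set the Content-Type per request rather than per extension. A sketch (assumes System.IO is imported; the helper name is made up):

    static string GuessContentType(string path)
    {
        byte[] header = new byte[4];
        using (FileStream fs = File.OpenRead(path))
        {
            fs.Read(header, 0, 4);
        }
        // PDF files start with "%PDF" (0x25 0x50 0x44 0x46)
        if (header[0] == 0x25 && header[1] == 0x50 &&
            header[2] == 0x44 && header[3] == 0x46)
            return "application/pdf";
        // TIFF files start with "II" (little-endian) or "MM" (big-endian)
        if ((header[0] == 0x49 && header[1] == 0x49) ||
            (header[0] == 0x4D && header[1] == 0x4D))
            return "image/tiff";
        return "application/octet-stream";
    }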

    Read the article

  • Bad performance function in PHP. With large files memory blows up! How can I refactor?

    - by André
    Hi, I have a function that strips out lines from files. I'm dealing with large files (more than 100MB). The PHP memory limit is 256MB, but the function that handles the stripping of lines blows up with a 100MB CSV file.

    What the function must do is this. Originally I have the CSV like:

    Copyright (c) 2007 MaxMind LLC. All Rights Reserved.
    locId,country,region,city,postalCode,latitude,longitude,metroCode,areaCode
    1,"O1","","","",0.0000,0.0000,,
    2,"AP","","","",35.0000,105.0000,,
    3,"EU","","","",47.0000,8.0000,,
    4,"AD","","","",42.5000,1.5000,,
    5,"AE","","","",24.0000,54.0000,,
    6,"AF","","","",33.0000,65.0000,,
    7,"AG","","","",17.0500,-61.8000,,
    8,"AI","","","",18.2500,-63.1667,,
    9,"AL","","","",41.0000,20.0000,,

    When I pass the CSV file to this function I get:

    locId,country,region,city,postalCode,latitude,longitude,metroCode,areaCode
    1,"O1","","","",0.0000,0.0000,,
    2,"AP","","","",35.0000,105.0000,,
    3,"EU","","","",47.0000,8.0000,,
    4,"AD","","","",42.5000,1.5000,,
    5,"AE","","","",24.0000,54.0000,,
    6,"AF","","","",33.0000,65.0000,,
    7,"AG","","","",17.0500,-61.8000,,
    8,"AI","","","",18.2500,-63.1667,,
    9,"AL","","","",41.0000,20.0000,,

    It only strips out the first line, nothing more. The problem is the performance of this function with large files: it blows up the memory. The function is:

    public function deleteLine($line_no, $csvFileName)
    {
        // this function strips a specific line from a file
        // if a line is stripped, the function returns true, else false
        //
        // e.g.
        // deleteLine(-1, xyz.csv); // strip last line
        // deleteLine(1, xyz.csv);  // strip first line

        // Assign the file name
        $filename = $csvFileName;
        $strip_return = FALSE;

        $data = file($filename); // reads the whole file into memory
        $pipe = fopen($filename, 'w');
        $size = count($data);

        if ($line_no == -1)
            $skip = $size - 1;
        else
            $skip = $line_no - 1;

        for ($line = 0; $line < $size; $line++)
            if ($line != $skip)
                fputs($pipe, $data[$line]);
            else
                $strip_return = TRUE;

        return $strip_return;
    }

    Is it possible to refactor this function so it does not blow up with the 256MB PHP memory limit? Give me some clues. Best regards,
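
    One clue, sketched under the assumption that a temporary file in the same directory is acceptable: stream the file line by line instead of slurping it all with file(), so memory use stays flat regardless of file size. The -1 (last line) case would need a first pass to count lines, which is omitted here:

    public function deleteLineStreaming($line_no, $csvFileName)
    {
        $in  = fopen($csvFileName, 'r');
        $out = fopen($csvFileName . '.tmp', 'w');
        $line = 1;
        $stripped = FALSE;
        while (($row = fgets($in)) !== FALSE) {
            if ($line != $line_no) {
                fputs($out, $row); // keep every line except the target
            } else {
                $stripped = TRUE;
            }
            $line++;
        }
        fclose($in);
        fclose($out);
        rename($csvFileName . '.tmp', $csvFileName); // swap in the result
        return $stripped;
    }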

    Read the article

  • Google App Engine Java app couldn't find javac ?

    - by Frank
    I'm learning to use Google App Engine. I installed it in NetBeans and the project works, but when I clicked on "Deploy To Google App Engine", I got the following error:

    Beginning server interaction for ...
    0% Creating staging directory
    5% Scanning for jsp files.
    8% Compiling jsp files.
    11% Compiling java files.
    Error Details:
    Apr 20, 2010 3:51:23 PM org.apache.jasper.JspC processFile
    INFO: Built File: \PayPal_Monitor.jsp
    java.lang.IllegalStateException: cannot find javac executable based on java.home, tried "C:\Program Files (x86)\Java\jre6\bin\javac.exe" and "C:\Program Files (x86)\Java\bin\javac.exe"
    Unable to update app: cannot find javac executable based on java.home, tried "C:\Program Files (x86)\Java\jre6\bin\javac.exe" and "C:\Program Files (x86)\Java\bin\javac.exe"
    Please see the logs [C:\Users\NM\AppData\Local\Temp\appcfg3946701335172983337.log] for further information.

    The file "javac.exe" is in: C:\Program Files (x86)\Java\jdk1.6.0_18\bin

    How can I add it to "java.home"? I'm using Windows Vista, and I tried to add it from "System - Environment Variables", but there is no "java.home" in there. Where can I find it? Frank
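
    For context: java.home is not an environment variable but a JVM system property, derived from the Java that launches the process. The usual fix, sketched here with the JDK path from the question, is to make NetBeans (and therefore the App Engine SDK it spawns) run on the JDK instead of the JRE, via <NetBeans install dir>\etc\netbeans.conf:

    # netbeans.conf (path taken from the question; adjust to your install)
    netbeans_jdkhome="C:\Program Files (x86)\Java\jdk1.6.0_18"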

    Read the article

  • deep linking in Excel sheets exported to html

    - by pomarc
    Hello everybody, I am working on a project where I must export a lot of Excel files to HTML. This is pretty straightforward using automation and saving as HTML. The problem is that many of these sheets have links to worksheets in other files, so I must find a way to write a link to a single inner worksheet.

    When you export a multi-sheet Excel file to HTML, Excel creates a main htm file, a folder named filename_files, and inside this folder it writes several files: a CSS file, an XML list of files, a file that creates the tab bar, and several HTML files named sheetxxx.htm, each one representing a worksheet. When you open the main file, you can click the menu bar at the bottom to select the appropriate sheet. Each tab is in fact a link which replaces a frame's content with the sheetxxx.htm file; when that file is loaded, a JavaScript function that selects the right tab gets called.

    The exported files will be published on a web site. I will have to post-process each file and replace every link to the other xls files with a link to the matching htm file, finding a way to open the right worksheet. I think I could add a parameter to the processed htm file's link URL, such as myfile.htm?sh=sheet002.htm if I want to link to the second worksheet of myfile.htm (ex myfile.xls).

    After I've exported them, I could inject a simple script into each of the main files which, when they are loaded, could retrieve the sh parameter with jQuery (this is easy) and use it to somehow replace the frSheet frame contents (where the sheets get loaded), opening the right inner sheet instead of the default sheet (this is what I call deep linking), mimicking what happens when a user clicks on a tab. This last step is missing... :)

    I am considering different options, such as replacing the source of the $("frSheet") frame after document.ready. I'd like to hear your advice on what could be the best way to achieve this. Any help is greatly appreciated, many thanks.
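
    For the missing last step, a small script along these lines could run in each injected main file. It assumes the exported frame really is named frSheet and that the sh parameter carries a path the frame can resolve (relative to the main file, possibly needing the filename_files folder prefix) - both worth verifying against your export:

    function openDeepLinkedSheet() {
        var match = window.location.search.match(/[?&]sh=([^&]+)/);
        if (!match) return;                   // no deep link requested
        var target = decodeURIComponent(match[1]);
        var frame = window.frames["frSheet"]; // frame created by the Excel export
        if (frame) frame.location.replace(target);
    }
    window.onload = openDeepLinkedSheet;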

    Read the article

  • Access denied from another thread

    - by Lobuno
    Hello! In a program I spawn a thread ("the working thread"). Here I copy some files, write some data to a database and eventually delete some other files or directories. Everything works fine.

    The problem is that I decided to move the deleting operation to another thread. So the working thread now copies the files or directories and writes to the database, and, if some other files need to be deleted, it spawns a second thread, and that second thread deletes the needed files or directories.

    The problem is that the deletion used to work 100% of the time when done in the working thread; now that the same is done in the secondary thread, I sometimes get an "Access denied" error and the files cannot be deleted. And no, the working thread is definitely NOT accessing the files and directories to be deleted at that moment.

    Sometimes (but not always) the main thread is impersonating some user, so when needed the deleting thread also runs under impersonation, just to grant the permissions needed to delete the files, so that should not be the problem. Does anybody have a clue why this could be happening?
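
    If this is .NET, one low-risk mitigation while hunting the real holder of the handle (virus scanners and indexers are frequent suspects for intermittent access-denied on delete) is to retry with a short backoff; the helper below is illustrative and assumes System.IO and System.Threading:

    static void DeleteWithRetry(string path, int attempts)
    {
        for (int i = 0; i < attempts; i++)
        {
            try
            {
                Directory.Delete(path, true); // recursive delete
                return;
            }
            catch (IOException) // sharing violation and similar transient errors
            {
                if (i == attempts - 1) throw;
                Thread.Sleep(200 * (i + 1)); // brief, growing backoff
            }
        }
    }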

    Read the article

  • Android fill ImageView from URL

    - by Luke Batley
    Hi, I'm trying to add an image to an ImageView from a URL. I have tried loading it as a bitmap, but nothing is showing. Does anyone know the best method to do this, or what I'm doing wrong? Here's my code:

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Check Preferences which sets UI
        setContentView(R.layout.singlenews);
        TextView headerText = (TextView) findViewById(R.id.header_text);
        headerText.setText("Latest News");
        PostTask posttask;
        posttask = new PostTask();
        posttask.execute();
    }

    public void loadNews() {
        newsStr = getIntent().getStringExtra("singleNews");
        try {
            JSONObject obj = new JSONObject(newsStr);
            content = obj.getString("content");
            title = obj.getString("title");
            fullName = obj.getString("fullname");
            created = obj.getString("created");
            NewsImageURL = obj.getString("image_primary");
            tagline = obj.getString("tagline");
            meta = "posted by: " + fullName + " " + created;
            // note: the original code had new URL("NewsImageURL") - a quoted
            // string literal instead of the variable, so the fetch could never work
            URL aURL = new URL(NewsImageURL);
            URLConnection conn = aURL.openConnection();
            conn.connect();
            InputStream is = conn.getInputStream();
            /* Buffered is always good for a performance plus. */
            BufferedInputStream bis = new BufferedInputStream(is);
            /* Decode url-data to a bitmap. */
            bm = BitmapFactory.decodeStream(bis);
            bis.close();
            is.close();
            /* Apply the Bitmap to the ImageView that will be returned. */
            Log.v("lc", "content=" + content);
            Log.v("lc", "title=" + title);
            Log.v("lc", "fullname=" + fullName);
            Log.v("lc", "created=" + created);
            Log.v("lc", "NewsImage=" + NewsImageURL);
            Log.v("lc", "Meta=" + meta);
            Log.v("lc", "tagline=" + tagline);
        } catch (JSONException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public class PostTask extends AsyncTask<Void, String, Boolean> {
        @Override
        protected Boolean doInBackground(Void... params) {
            boolean result = false;
            loadNews();
            publishProgress("progress");
            return result;
        }

        protected void onProgressUpdate(String... progress) {
            StringBuilder str = new StringBuilder();
            for (int i = 1; i < progress.length; i++) {
                str.append(progress[i] + " ");
            }
        }

        @Override
        protected void onPostExecute(Boolean result) {
            super.onPostExecute(result);
            Log.v("BGThread", "begin fillin data");
            fillData();
        }
    }

    public void fillData() {
        NewsView = LayoutInflater.from(getBaseContext()).inflate(R.layout.newsdetailact, null);
        TextView Title = (TextView) NewsView.findViewById(R.id.NewsTitle);
        Title.setText(title);
        TextView Tagline = (TextView) NewsView.findViewById(R.id.subtitle);
        Tagline.setText(tagline);
        TextView MetaData = (TextView) NewsView.findViewById(R.id.meta);
        MetaData.setText(meta);
        ImageView NewsImage = (ImageView) NewsView.findViewById(R.id.imageView2);
        NewsImage.setImageBitmap(bm);
        TextView MainContent = (TextView) NewsView.findViewById(R.id.maintext);
        MainContent.setText(content);
        Log.v("BGThread", "Filled results");
        adapter = new MergeAdapter();
        adapter.addView(NewsView);
        setListAdapter(adapter);
    }

    Read the article

  • Generating ul from array with php fails

    - by Toni Michel Caubet
    Given a $files array I am trying to generate a list, like this:

    <ul class="">
    <? for($i=0;$i < count($files); $i++) {
        $text = str_replace("http://domain.com/files/uploads/", "", $files[$i]);
        $file = $files[$i];
        $notme = sesion()>0 && $obj['id'] != sesion();
    ?>
        <li>
            <a target="_blank" href="<?=$file?>"><?=$text?></a>
            <? if($notme){ ?>
            <span class="add_to_files" data-file="<?=$file?>">Agregar al gestor</span>
            <? } ?>
        </li>
    <? } ?>
    </ul>

    This is the output. Please note how the span HTML is wrong:

    <ul class="">
        <li>
            <a href="http://domain.com/files/uploads/388400967232883.jpg" target="_blank">388400967232883.jpg</a>
            <span target="_blank" href="http://domain.com/files/uploads/388400967232883.jpg" url"="" data-file="&lt;a class=" class="add_to_files">http://domain.com/files/uploads/388400967232883.jpg"&gt;Agregar al gestor</span>
        </li>
    </ul>

    Any idea why?
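
    The mangled span suggests that $files[$i] itself contains markup or quotes for some rows, which then break the attribute they land in. Escaping the value before it reaches an HTML attribute would keep the output well-formed and make the stray markup visible; a sketch:

    $file = htmlspecialchars($files[$i], ENT_QUOTES, 'UTF-8');
    $text = htmlspecialchars(
        str_replace("http://domain.com/files/uploads/", "", $files[$i]),
        ENT_QUOTES, 'UTF-8');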

    Read the article

  • How do I make Java wait for a boolean to run a function

    - by TWeeKeD
    I'm sure this is pretty simple, but I can't figure it out, and it's frustrating to be stuck on (what should be) an easy step. I have a method that runs one function that gives a response. This method actually handles the uploading of the file, so it takes a second to give a response. I need this response in the following method: sendPicMsg needs to complete and then forward its response to sendMessage. Please help.

    b1.setOnClickListener(new View.OnClickListener() {
        public void onClick(View v) {
            if (!uploadMsgPic.equalsIgnoreCase("")) {
                Log.v("response", "Pic in storage");
                sendPicMsg();
                sendMessage();
            } else {
                sendMessage();
            }

    1st Method:

    public void sendPicMsg() {
        Log.v("response", "sendPicMsg Loaded");
        if (!uploadMsgPic.equalsIgnoreCase("")) {
            final SharedPreferences preferences = this.getActivity().getSharedPreferences("MyPreferences", getActivity().MODE_PRIVATE);
            AsyncHttpClient client3 = new AsyncHttpClient();
            RequestParams params3 = new RequestParams();
            File file = new File(uploadMsgPic);
            try {
                File f = new File(uploadMsgPic.replace(".", "1."));
                f.createNewFile();
                // Convert bitmap to byte array
                Bitmap bitmap = decodeFile(file, 400);
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                bitmap.compress(CompressFormat.PNG, 0 /* ignored for PNG */, bos);
                byte[] bitmapdata = bos.toByteArray();
                // write the bytes in file
                FileOutputStream fos = new FileOutputStream(f);
                fos.write(bitmapdata);
                params3.put("file", f);
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            }
            params3.put("email", preferences.getString("loggedin_user", ""));
            params3.put("webversion", "1");
            client3.post("http://peekatu.com/apiweb/msgPic_upload.php", params3, new AsyncHttpResponseHandler() {
                @Override
                public void onSuccess(String response) {
                    Log.v("response", "Upload Complete");
                    refreshChat();
                    //responseString = response;
                    Log.v("response", "msgPic has been uploaded" + response);
                    //parseChatMessages(response);
                    response = picurl; // note: this assignment looks reversed; picurl = response was probably intended
                    uploadMsgPic = "";
                    if (picurl != null) {
                        Log.v("response", "picurl is set");
                    }
                    if (picurl == null) {
                        Log.v("response", "picurl no ready");
                    }
                }
            });
            sendMessage(); // runs immediately, before onSuccess above fires
        }
    }

    2nd Method:

    public void sendMessage() {
        final SharedPreferences preferences = this.getActivity().getSharedPreferences("MyPreferences", getActivity().MODE_PRIVATE);
        if (preferences.getString("Username", "").length() <= 0) {
            editText1.setText("");
            Toast.makeText(this.getActivity(), "Please Login to send messages.", 2);
            return;
        }
        AsyncHttpClient client = new AsyncHttpClient();
        RequestParams params = new RequestParams();
        if (type.equalsIgnoreCase("3")) {
            params.put("toid", user);
            params.put("action", "sendprivate");
        } else {
            params.put("room", preferences.getString("selected_room", "Adult Lobby"));
            params.put("action", "insert");
        }
        Log.v("response", "Sending message " + editText1.getText().toString());
        params.put("message", editText1.getText().toString());
        params.put("media", picurl);
        params.put("email", preferences.getString("loggedin_user", ""));
        params.put("webversion", "1");
        client.post("http://peekatu.com/apiweb/messagetest.php", params, new AsyncHttpResponseHandler() {
            @Override
            public void onSuccess(String response) {
                refreshChat();
                //responseString = response;
                Log.v("response", response);
                //parseChatMessages(response);
                if (picurl != null)
                    Log.v("response", picurl);
            }
        });
        editText1.setText("");
        lv.setSelection(adapter.getCount() - 1);
    }
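
    Since client3.post() returns before the upload finishes, anything that must see its result belongs inside the callback rather than after the call. A minimal reshuffle, assuming the server's response body is (or contains) the stored picture URL:

    client3.post("http://peekatu.com/apiweb/msgPic_upload.php", params3,
            new AsyncHttpResponseHandler() {
        @Override
        public void onSuccess(String response) {
            picurl = response;  // assumption: the server echoes the stored URL
            uploadMsgPic = "";
            sendMessage();      // runs only once picurl is actually ready
        }
    });
    // and drop the unconditional sendMessage() calls that currently follow sendPicMsg()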

    Read the article

  • BAD ARCHIVE MIRROR using PXE BOOT method

    - by omkar
    I am trying to automatically install Ubuntu on a client PC using the PXE boot method. My objectives are as follows (I am following the steps given in this link: installation using PXE BOOT INSTALL):

    1. The server will have a kickstart config file which contains the parameters for the OS installation and the files required for the installation.
    2. The client will have to detect this configuration along with the setup files and complete the installation without any input from the user.

    On my server I have installed dhcp3-server, Apache2 and TFTP to help with the installation.

    I have nearly achieved my first objective: I am able to boot my client using the files stored on the server, but during the installation stage it asks me to "CHOOSE A MIRROR of UBUNTU ARCHIVE". I gave the server's IP address and the path on the server where the files are located, but it gives me the error "BAD ARCHIVE MIRROR".

    So, is it possible that instead of downloading all the files from the internet and storing them on my disk, I can use the files which come with the Ubuntu CD? And how should I store these files on the disk (should I zip them)? Secondly, I am also generating the ks.cfg which I want to give to the client for automatic installation of the OS, so how should the configuration file be passed to the installation process?
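
    On the mirror point: the installer can use the CD tree itself as the mirror, because the ISO already contains the dists/ and pool/ archive layout - so the contents should be served unzipped, not archived. A hedged sketch with placeholder paths and IP:

    # serve the CD contents, e.g. at http://<server-ip>/ubuntu
    sudo mount -o loop ubuntu-12.04-server-amd64.iso /var/www/ubuntu

    # debian-installer preseed lines pointing at that local mirror
    d-i mirror/country string manual
    d-i mirror/http/hostname string 192.168.1.10
    d-i mirror/http/directory string /ubuntu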

    Read the article

  • MaxStartups and MaxSessions configurations parameter for ssh connections?

    - by Webby
    I am copying files from machineB and machineC into machineA, where my shell script runs. If a file is not there on machineB then it should be on machineC for sure, so I try copying from machineB first and fall back to machineC. I am copying the files in parallel using the GNU Parallel library and it is working fine. Currently I am copying 10 files in parallel. Below is my shell script -

    #!/bin/bash

    export PRIMARY=/test01/primary
    export SECONDARY=/test02/secondary

    readonly FILERS_LOCATION=(machineB machineC)
    export FILERS_LOCATION_1=${FILERS_LOCATION[0]}
    export FILERS_LOCATION_2=${FILERS_LOCATION[1]}

    PRIMARY_PARTITION=(550 274 2 546 278) # this will have more file numbers
    SECONDARY_PARTITION=(1643 1103 1372 1096 1369 1568) # this will have more file numbers

    export dir3=/testing/snapshot/20140103

    find "$PRIMARY" -mindepth 1 -delete
    find "$SECONDARY" -mindepth 1 -delete

    do_Copy() {
      el=$1
      PRIMSEC=$2
      scp david@$FILERS_LOCATION_1:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMSEC/. || scp david@$FILERS_LOCATION_2:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMSEC/.
    }
    export -f do_Copy

    parallel --retries 10 -j 10 do_Copy {} $PRIMARY ::: "${PRIMARY_PARTITION[@]}" &
    parallel --retries 10 -j 10 do_Copy {} $SECONDARY ::: "${SECONDARY_PARTITION[@]}" &
    wait

    echo "All files copied."

    Problem statement: with the above script, at some point I am getting this exception -

    ssh_exchange_identification: Connection closed by remote host
    ssh_exchange_identification: Connection closed by remote host
    ssh_exchange_identification: Connection closed by remote host

    And I guess the error is typically caused by too many ssh/scp connections starting at the same time. That leads me to believe /etc/ssh/sshd_config MaxStartups and MaxSessions are set too low. But my question is: on which server are they too low - machineB and machineC, or machineA? And on which machines do I need to increase the numbers?

    On machineA this is what I can find -

    root@machineA:/home/david# grep MaxStartups /etc/ssh/sshd_config
    #MaxStartups 10:30:60
    root@machineA:/home/david# grep MaxSessions /etc/ssh/sshd_config

    And on machineB and machineC this is what I can find -

    [root@machineB ~]$ grep MaxStartups /etc/ssh/sshd_config
    #MaxStartups 10
    [root@machineB ~]$ grep MaxSessions /etc/ssh/sshd_config
    #MaxSessions 10
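
    The refusals are issued by the sshd that receives the connections, so the limits that matter are on machineB and machineC, not machineA. The commented default of 10 shown above is easy to exceed with two parallel jobs of 10 plus retries; a sketch of raised values for /etc/ssh/sshd_config on machineB and machineC (the numbers are illustrative):

    MaxStartups 30:30:100
    MaxSessions 30
    # then reload sshd, e.g.: /etc/init.d/sshd reload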

    Read the article

  • JAWStats statspath error on windows

    - by crosenblum
    I have AWStats, which works fine, and I am trying to get JAWStats working. I have tried both back and forward slashes to get the program to read the DirData files. I even moved the DirData folder outside of the Program Files folder, in case it had problems with folder names containing spaces. Here is my config file:

    // core config parameters
    $sConfigDefaultView = "thismonth.all";
    $bConfigChangeSites = true;
    $bConfigUpdateSites = true;
    $sUpdateSiteFilename = "xml_update.php";

    // individual site configuration
    // awstats092012.noname.jumpingcrab.com.txt
    $aConfig["site1"] = array(
        "statspath"  => "C:\\Program Files\\AWStats\\DirData\\",
        "statsname"  => "awstats[MM][YYYY].yourexample.com.txt",
        "updatepath" => "C:\\Program Files\\AWStats\\wwwroot\\cgi-bin\\awstats.pl\\",
        "siteurl"    => "http://yourexample.com",
        "theme"      => "default",
        "fadespeed"  => 250,
        "password"   => "",
        "includes"   => ""
    );

    Domain names changed to protect the innocent... :P

    Here is the error message:

    An error has occured: No AWStats Log Files Found
    JAWStats cannot find any AWStats log files in the specified directory: C:\Program Files\AWStats\DirData\
    Is this the correct folder? Is your config name, site1, correct?
    Please refer to the installation instructions for more information.
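
    One thing worth checking before anything else: the statsname pattern has to match the file names actually sitting in DirData. Given the file named in the comment above, a sketch of the matching site1 entries (forward slashes also work in PHP on Windows and sidestep the escaping):

    "statspath" => "C:/Program Files/AWStats/DirData/",
    "statsname" => "awstats[MM][YYYY].noname.jumpingcrab.com.txt",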

    Read the article

  • Using dropbox / symbolic link combo successfully

    - by wim
    In the past I have kept some files on Dropbox by copying them into my ~/Dropbox folder on Ubuntu. I don't want to move the original files into the Dropbox sync folder or muck around with my directory structure. Over time I found I was using Dropbox more and more, and wasting a lot of space through duplication of data. I use a small SSD locally for the OS; any other data is kept on mounted shares from my NAS.

    I found I could successfully get files up to the cloud by using symbolic links, like:

    ln -s /some/mounted/share/dir ~/Dropbox/dir

    Dropbox would carry on and sync those files remotely while only using up the space of the symbolic link locally. This worked well for me for a few weeks, until I turned on my laptop one day and saw a '421 files have been removed from your dropbox' notification. The files were still there in the original mounted share, but the symbolic links I'd made were completely gone for some reason.

    What did I do wrong? It is possible the share had become unmounted, but I didn't expect that to cause all my files to be deleted from the cloud - could it? How can I 'share' files to my Dropbox in this way without the danger of the originals being modified remotely?
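
    If the goal is simply to expose an external directory inside the sync folder without symlinks, a bind mount is one commonly used alternative; Dropbox then sees an ordinary directory. Note this is a sketch, not a guarantee: a similar deletion risk returns if the underlying share vanishes while Dropbox is running, so pausing syncing before unmounting is still prudent.

    sudo mount --bind /some/mounted/share/dir ~/Dropbox/dir
    # persistent variant via /etc/fstab (paths illustrative):
    /some/mounted/share/dir  /home/you/Dropbox/dir  none  bind  0  0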

    Read the article
