Search Results

Search found 9217 results on 369 pages for 'cross apply'.

  • pix 501, static route to d-link router (different subnet)

    - by ra170
    I have a pix 501 cisco firewall with internal ip 192.168.10.1. I have connected a d-link router (dir-655) to the pix 501. The d-link router has internal ip 192.168.0.1. The picture looks something like this:

    ```
    |pix 501| has 192.168.10.1 ip
    |DIR-655| has 192.168.0.1 ip

    1. |cable modem|----|pix 501|-------|DIR-655|-----PC

    2. PC--------|pix 501|---------|DIR-655|
                     |
               |cable modem|
    ```

    When I'm on the wireless network (dir-655) with an assigned ip of 192.168.0.x, I can cross the subnet and connect to my firewall at 192.168.10.1 (pic. 1). The problem is that when I'm on the 192.168.10.x network, I can't connect to anything on the 192.168.0.x network (pic. 2). I've tried entering a static route like this:

    ```
    route inside 192.168.0.0 255.255.255.0 192.168.10.1 1
    ```

    I also tried assigning a static ip (192.168.10.30) to the wan interface on the DIR-655 and then tried this:

    ```
    route inside 192.168.0.0 255.255.255.0 192.168.10.30 1
    ```

    But still, I can't connect to 192.168.0.1 or anything on that subnet. Is there a way to set up a static route? Would adding a separate router between the PIX 501 and the DIR-655 help? I would think that a static route like this should take care of it, but it doesn't. This is my route config and nat:

    ```
    (config)# sh route
        outside 0.0.0.0 0.0.0.0 (outside_IP) 1 DHCP static
        outside (outside_IP) 255.255.248.0 (outside_IP) 1 CONNECT static
        inside 192.168.0.0 255.255.255.0 192.168.10.1 1 OTHER static
        inside 192.168.10.0 255.255.255.0 192.168.10.1 1 CONNECT static

    (or: route inside 192.168.0.0 255.255.255.0 192.168.10.30 1)

    (config)# sh nat
        nat (inside) 1 192.168.1.0 255.255.255.0 0 0
        nat (inside) 1 192.168.10.0 255.255.255.0 0 0
        nat (inside) 1 0.0.0.0 0.0.0.0 0 0
    ```

    I ended up turning the DIR-655 into an access point (turning off DHCP and plugging a cable from the PIX lan interface into one of the LAN ports on the DIR-655, leaving the WAN port empty). That works, in the sense that the DIR-655 is on the same subnet now and I can access every machine. However, the question remains: why can't I simply route between those two? Would a router between them help? One of the reasons I ask is that the PIX 501 has only 10 licenses, so now I'm using almost all of them (I have a few computers, iphones, a ps3, a print server, etc.). I would really appreciate some help! Thanks.
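
    For reference, a minimal sketch of what the routed version would usually need on a PIX 6.x, assuming (as tried above) the DIR-655 WAN port is statically set to 192.168.10.30; the access-list name is arbitrary. One caveat worth knowing: a PIX running 6.x generally refuses to send traffic back out the same interface it arrived on, so inside-to-inside hairpin routing like this may simply not be possible on a 501, which would explain why the route alone never helped.

    ```
    ! Hedged sketch for PIX 6.x; assumes the DIR-655 WAN is 192.168.10.30
    route inside 192.168.0.0 255.255.255.0 192.168.10.30 1

    ! Exempt LAN-to-LAN traffic from NAT ("nonat" is an arbitrary name)
    access-list nonat permit ip 192.168.10.0 255.255.255.0 192.168.0.0 255.255.255.0
    nat (inside) 0 access-list nonat
    ```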

    Read the article

  • Hell: NTFS "Restore previous versions"...

    - by ttsiodras
    The hell I have experienced these last 24h:

    - Windows 7 installation hosed after a bluetooth driver install.
    - Attempting to recover using restore points via "Repair" on the bootable Win7 installation CD.
    - Attempting to go back one day in the restore points. No joy.
    - Attempting to go back two days in the restore points. No joy.
    - Attempting to go back one week in the restore points. Still no joy. Windows won't boot. Apparently something is REALLY hosed.
    - And then it hits me - PANIC - the restore points somehow reverted DATA files to their older versions! Word, Powerpoint, SPSS, etc. document versions are all one week old now.
    - Using the "freshest" restore point: failed to restore yesterday's restore point!!! I am stuck at old versions of the data!!!
    - Booting KNOPPIX, mounting the NTFS partition as read-only under KNOPPIX. Checking. Nope, the data files are still the one-week-old versions.
    - Booting the Win7 CD, Recovery console - Cmd prompt - navigating - yep, data files are still one week old.
    - Removing the drive, mounting it under another Win7 installation. Still old data.
    - Running NTFS undelete on the drive (read-only scan), searching for a file created yesterday. Not found. Despair.

    At this point, an idea: I will install a brand new Windows installation, keeping the old one in Windows.old (default behaviour of Windows installs). I boot the new install, go to my C:\Data\ folder, choose "Restore previous versions", click on yesterday's date, and click open... YES! It works! I can see the latest versions of my files (e.g. from yesterday). Thank God.

    And then I try to view the files under the "yesterday snapshot-version" of c:\Users\MyAccount\Desktop... and I get "Permission Denied" as soon as I try to open "Users\MyAccount". I make sure I am an administrator. No joy. Apparently, the new Windows installation does not have access to read the "NTFS snapshots" or "Volume Shadow Snapshots" of my old Windows account! Cross-installation permissions? I need to somehow tell the new Windows install that I am the same "old" user, so that I will be able to access the "Users\MyAccount" folder of the snapshot of my old user account. Help?
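
    For anyone in the same hole, a hedged sketch of one way around the ACL mismatch (the snapshot number and rescue paths below are hypothetical): expose the shadow copy as a folder, then copy the old profile out using robocopy's backup mode, which uses the backup privilege to bypass ACLs that no longer match the new install's accounts. Run everything from an elevated command prompt.

    ```
    :: List the snapshots the new install can see
    vssadmin list shadows

    :: Expose one as a folder (the shadow copy number is hypothetical;
    :: the trailing backslash in the target is required)
    mklink /d C:\oldsnap \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\

    :: Copy the old profile out in backup mode (/b bypasses ACLs)
    robocopy C:\oldsnap\Users\MyAccount D:\rescue /e /b
    ```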

    Read the article

  • Why do my Application Compatibility Toolkit Data Collectors fail to write to my ACT Log Share?

    - by Jay Michaud
    I am trying to get the Microsoft Application Compatibility Toolkit 5.6 (version 5.6.7320.0) to work, but I cannot get the Data Collectors to write to the ACT Log Share. The configuration is as follows.

    Machine: ACT-Server
    Domain: mydomain.example.com
    OS: Windows 7 Enterprise 64-bit Edition
    Windows Firewall configuration: File and Printer Sharing (SMB-In) is enabled for Public, Domain, and Private networks
    ACT Log Share: ACT

    Share permissions*:

    ```
    Group/user names    Allow permissions
    -------------------------------------
    Everyone            Full Control
    Administrator       Full Control
    Domain Admins       Full Control
    Administrators      Full Control
    ANONYMOUS LOGON     Full Control
    ```

    Folder permissions*:

    ```
    Group/user name    Allow permissions         Apply to
    -----------------------------------------------------------------------------
    ANONYMOUS LOGON    Read, write & execute     This folder, subfolders, and files
    Domain Admins      Full control              This folder, subfolders, and files
    Everyone           Read, write & execute     This folder, subfolders, and files
    Administrators     Full control              This folder, subfolders, and files
    CREATOR OWNER      Full control              Subfolders and files
    SYSTEM             Full control              This folder, subfolders, and files
    INTERACTIVE        Traverse folder / execute file, List folder / read data,
                       Read attributes, Read extended attributes,
                       Create files / write data, Create folders / append data,
                       Write attributes, Write extended attributes,
                       Delete subfolders and files, Delete, Read permissions
                                                 This folder, subfolders, and files
    SERVICE            (same as INTERACTIVE)
    BATCH              (same as INTERACTIVE)
    ```

    *I am fully aware that these permissions are excessive, but that is beside the point of this question.

    Some of the clients running the Data Collector are domain members, but some are not. I am working under the assumption that this is a Windows file sharing permission issue or a network access policy issue, but of course, I could be wrong. It is my understanding that the Data Collector runs in the security context of the SYSTEM account, which for domain members appears on the network as MYDOMAIN\machineaccount. It is also my understanding from reading numerous pieces of documentation that setting the ANONYMOUS LOGON permissions as I have above should allow these computer accounts and non-domain-joined computers to access the share.

    To test connectivity, I set up the Windows XP Mode virtual machine (VM) on ACT-Server. In the VM, I opened a command prompt running as SYSTEM (using the old "at" command trick). I used this command prompt to run explorer.exe. In this Windows Explorer instance, I typed \\ACT-Server\ACT into the address bar, and then I was prompted for logon credentials. The goal, though, was not to be prompted. I also used the "net use /delete" command in the command prompt window to delete connections to the \\ACT-Server\IPC$ share each time my connection attempt failed. I have made sure that the appropriate firewall exceptions are in place.

    Since ACT-Server is a domain member, the "Network access: Sharing and security model for local accounts" security policy is set to "Classic - local users authenticate as themselves". In spite of this, I still tried enabling the Guest account and adding permissions for it on the share, to no effect.

    What am I missing here? How do I allow anonymous logons to a shared folder as a step toward getting my ACT Data Collectors to deposit their data correctly? Am I even on the right track, or is the issue elsewhere?
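
    Anonymous (null-session) access to a share is gated by machine-wide policies in addition to the share and NTFS ACLs. A hedged sketch of the usual levers, expressed as their registry equivalents (the policy names are in the comments; check these against your hardening baseline before applying):

    ```
    :: "Network access: Let Everyone permissions apply to anonymous users" = Enabled
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v EveryoneIncludesAnonymous /t REG_DWORD /d 1 /f

    :: "Network access: Shares that can be accessed anonymously" = ACT
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v NullSessionShares /t REG_MULTI_SZ /d ACT /f

    :: "Network access: Restrict anonymous access to Named Pipes and Shares" = Disabled
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v RestrictNullSessAccess /t REG_DWORD /d 0 /f
    ```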

    Read the article

  • What is it that kills laptop batteries?

    - by Mala
    There are many superstitions about what you must never do lest your battery become worthless - and by worthless I mean hold about 16-24 seconds of charge. This has happened to every laptop I have ever owned, and I just got a new one, so please help me sort out fact from fiction. Here are some of the things I've heard:

    1. Do not keep your laptop fully charged. You must run it completely down every so often.
    2. Do not use your laptop plugged into the wall. Only plug it in when it needs charge.
    3. If you will not be using your laptop for a long period of time, don't leave it at full charge.
    4. Do not leave your laptop running 24/7.

    The first two I know to be complete fiction: this was true of old batteries such as you might have had in an iPod in 2003, but modern batteries function better when kept at or near full charge. Devices even have circuitry to prevent you from completely depleting your battery, as this is dangerous. The third point sounds probable, and I'd be interested to know if it is true. However, it doesn't really apply to me, because I'm not really the type to leave my laptop alone for a day, much less a "long period of time". The fourth seems most likely of the above, but only because of causality: I have always done this, and my batteries have always crapped out on me. I've generally treated a laptop like a desktop with a battery backup, one that I can move from one room to another if necessary. The fact that my batteries tend to last less than 30 seconds has further entrenched this behavior. Should I be trying to break this habit? Are there any other things that ruin laptop batteries? I love that I can actually use my new laptop unplugged :) I'd like to keep it that way.

    Update: Additional question: if the computer will be used for an extended period of time plugged in, does it make sense to remove the battery first?

    Update 2: I know people with laptops older than mine, who actively use their laptops as much as I do, and their batteries still hold about an hour's charge while mine holds less than 30 seconds, hence my belief that something I'm doing kills them.

    Read the article

  • Cannot install grub to RAID1 (md0)

    - by Andrew Answer
    I have a RAID1 array on my Ubuntu 12.04 LTS, and my /dev/sda HDD was replaced several days ago. I used these commands to replace it:

    ```
    # go to superuser
    sudo bash

    # see RAID state
    mdadm -Q -D /dev/md0
    # State should be "clean, degraded"

    # remove broken disk from RAID
    mdadm /dev/md0 --fail /dev/sda1
    mdadm /dev/md0 --remove /dev/sda1

    # see partitions
    fdisk -l

    # shutdown computer
    shutdown now

    # physically replace old disk by new
    # start system again

    # see partitions
    fdisk -l

    # copy partitions from sdb to sda
    sfdisk -d /dev/sdb | sfdisk /dev/sda

    # recreate id for sda
    sfdisk --change-id /dev/sda 1 fd

    # add sda1 to RAID
    mdadm /dev/md0 --add /dev/sda1

    # see RAID state
    mdadm -Q -D /dev/md0
    # State should be "clean, degraded, recovering"

    # to see status you can use
    cat /proc/mdstat
    ```

    This is my mdadm output after the sync:

    ```
    /dev/md0:
            Version : 0.90
      Creation Time : Wed Feb 17 16:18:25 2010
         Raid Level : raid1
         Array Size : 470455360 (448.66 GiB 481.75 GB)
      Used Dev Size : 470455360 (448.66 GiB 481.75 GB)
       Raid Devices : 2
      Total Devices : 2
    Preferred Minor : 0
        Persistence : Superblock is persistent

        Update Time : Thu Nov  1 15:19:31 2012
              State : clean
     Active Devices : 2
    Working Devices : 2
     Failed Devices : 0
      Spare Devices : 0

               UUID : 92e6ff4e:ed3ab4bf:fee5eb6c:d9b9cb11
             Events : 0.11049560

        Number   Major   Minor   RaidDevice State
           0       8        1        0      active sync   /dev/sda1
           1       8       17        1      active sync   /dev/sdb1
    ```

    After the rebuild completed, fdisk -l says that I have no valid partition table on /dev/md0. This is my fdisk -l output:

    ```
    Disk /dev/sda: 500.1 GB, 500107862016 bytes
    255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00057d19

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *          63   940910984   470455461   fd  Linux raid autodetect
    /dev/sda2       940910985   976768064    17928540    5  Extended
    /dev/sda5       940911048   976768064    17928508+  82  Linux swap / Solaris

    Disk /dev/sdb: 500.1 GB, 500107862016 bytes
    255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000667ca

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1   *          63   940910984   470455461   fd  Linux raid autodetect
    /dev/sdb2       940910985   976768064    17928540    5  Extended
    /dev/sdb5       940911048   976768064    17928508+  82  Linux swap / Solaris

    Disk /dev/md0: 481.7 GB, 481746288640 bytes
    2 heads, 4 sectors/track, 117613840 cylinders, total 940910720 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    Disk /dev/md0 doesn't contain a valid partition table
    ```

    This is my grub install output:

    ```
    root@answe:~# grub-install /dev/sda
    /usr/sbin/grub-setup: warn: Attempting to install GRUB to a disk with multiple partition labels or both partition label and filesystem. This is not supported yet..
    /usr/sbin/grub-setup: error: embedding is not possible, but this is required for cross-disk install.

    root@answe:~# grub-install /dev/sdb
    Installation finished. No error reported.
    ```

    So:

    1. "update-grub" finds only the /dev/sda and /dev/sdb Linux installs, not /dev/md0
    2. "dpkg-reconfigure grub-pc" says "GRUB failed to install the following devices: /dev/md0"

    I cannot load my system except from /dev/sdb1 and /dev/sda1, and then only in DEGRADED mode... Can anybody resolve this issue? I have a big headache because of this.
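
    A hedged sketch of one common way out (untested against this exact box, and destructive to /dev/sda, so only run it against the freshly replaced disk): the grub-setup warning about "multiple partition labels or both partition label and filesystem" usually means stale signatures are still visible at the start of the raw disk, so wipe the first sectors of the new disk, rebuild its partition table from the survivor, re-add it to the array, and then install GRUB on both members so either disk can boot alone. (The missing partition table on /dev/md0 itself is normal when the filesystem lives directly on the array.)

    ```
    # DESTRUCTIVE to /dev/sda -- clears leftover boot/label signatures
    dd if=/dev/zero of=/dev/sda bs=512 count=2048

    # Re-copy the partition table from the surviving disk, re-add to the array
    sfdisk -d /dev/sdb | sfdisk /dev/sda
    mdadm /dev/md0 --add /dev/sda1

    # After the resync finishes, put GRUB on both members
    grub-install /dev/sda
    grub-install /dev/sdb
    update-grub
    ```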

    Read the article

  • Operating systems on SD cards

    - by HisDudeness
    I've been getting some wild ideas over the last few days, like putting some operating systems onto SD cards rather than on my hard drive. I'll go into the details now and explain what led me to consider this probably abominable decision.

    I am on a laptop (which means I have a native SD card reader) currently running a cross-distro setup, with a bunch of Linux systems (placed in dedicated ext4 logical partitions inside a huge extended one) managed by a single GRUB. As of today, my laptop hasn't even seen a Windows system through binoculars.

    I was thinking about placing the OS part of my setup onto a Secure Digital card, to save all of my 500 GB hard drive for documents, music, videos and so on, and to be able to just remove the SD and boot my system on another computer too, as well as having the possibility of booting other systems on mine by just plugging in another SD, without having to keep it constantly in my PC. Also, in the remote case that in the near future I want to boot Windows 8 on it, I've read that it causes major boot incompatibility issues with other systems by needing a digital signature in order for them to start. By having it on a removable drive, I could just get rid of it when I don't need it and swap its card for the Linux one, so there would be no obstacles to their boot.

    Now, my questions are:

    1. I know that, unlike traditional rotating disk drives, integrated-circuit ones have a limited lifespan in terms of cell rewrites. Is that an obstacle to this kind of usage? I mean, some ultrabooks are using SSDs now - is it the same issue, or are there differences between Solid State Drives and Secure Digital cards in that sense? Would having them store system files which sit in fixed positions (making the even-wear technology useless), constantly being re-read and updated, just wear them out quickly?

    2. Are all motherboards and BIOSes able to boot from SD cards just as they do from USB pen drives (provided the card reader is USB-connected)? Or can bootloaders like GRUB not be installed on SDs and still work? If they can't, would it be a solution to install GRUB to the MBR and point its boot entry at the SD? Would that work? Are there any other problems with installing OSes on a Secure Digital?

    Read the article

  • Can't save screen resolution setting.

    - by Searock
    Hi, my screen resolution in Windows and a previous version of Ubuntu (9.04) was 1152 x 864, but Ubuntu 10.04 only gives me options of 1024 x 768 and 1360 x 768. I have somehow managed to add a 1152x864 resolution by using the xrandr command:

    ```
    searock@searock-desktop:~$ cvt 1152 864
    # 1152x864 59.96 Hz (CVT 1.00M3) hsync: 53.78 kHz; pclk: 81.75 MHz
    Modeline "1152x864_60.00"  81.75  1152 1216 1336 1520  864 867 871 897 -hsync +vsync

    searock@searock-desktop:~$ xrandr --newmode "1152x864_60.00" 81.75 1152 1216 1336 1520 864 867 871 897 -hsync +vsync

    searock@searock-desktop:~$ xrandr --addmode S-video 1152x864
    xrandr: cannot find output "S-video"

    searock@searock-desktop:~$ xrandr
    Screen 0: minimum 320 x 200, current 1024 x 768, maximum 4096 x 4096
    VGA1 connected 1024x768+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
       1360x768       59.8
       1024x768       60.0*
       800x600        60.3     56.2
       848x480        60.0
       640x480        59.9     59.9
      1152x864_60.00 (0x124)   81.0MHz
            h: width  1152 start 1216 end 1336 total 1520 skew    0 clock   53.3KHz
            v: height  864 start  867 end  871 total  897          clock   59.4Hz

    searock@searock-desktop:~$ xrandr --addmode VGA1 1152x864_60.00
    ```

    But the problem is that whenever I restart my computer I get this message: "Could not apply the stored configuration for the monitors. Could not find a suitable configuration of screens." And then it comes back to 1024 x 768.

    My graphics card is an Intel(R) 82945G Express Chipset Family. Is there any way I can fix this once and for all? Thanks.

    Edit 1: rumtscho has suggested that I modify the xorg.conf file, but I am not sure what HorizSync means. Is it the horizontal frequency? My monitor model is an Acer V173. Here's my specification. So what should HorizSync and VertRefresh be?

    Edit 2: I have edited my xorg.conf file as follows:

    ```
    Section "Monitor"
        Identifier   "Configured Monitor"
        HorizSync    30-80
        VertRefresh  55-75
    EndSection
    ```

    Then I added the resolution and restarted my computer, and I am still facing the same problem. Is there something I am missing?

    Edit 3: For now I have edited /etc/gdm/Init/Default (the gdm startup script) to include the following xrandr commands, just below the line `initctl -q emit login-session-start DISPLAY_MANAGER=gdm`:

    ```
    xrandr --newmode "1152x864_60.00" 81.75 1152 1216 1336 1520 864 867 871 897 -hsync +vsync
    xrandr --addmode VGA1 1152x864_60.00
    xrandr -s 1152x864_60.00
    ```

    This has solved my problem, but these commands have increased my computer's boot time. I think I will have to edit the xorg file properly.

    Edit 4: Instead of adding the commands to the gdm startup script, I have created a shell script and added it to startup (System - Preferences - Startup Applications):

    ```
    #!/bin/bash
    xrandr --newmode "1152x864_60.00" 81.75 1152 1216 1336 1520 864 867 871 897 -hsync +vsync
    xrandr --addmode VGA1 1152x864_60.00
    xrandr -s 1152x864_60.00
    ```

    And don't forget to add execution rights (Right Click - Properties - Permissions - Allow executing file as program).

    Read the article

  • How do large blobs affect SQL delete performance, and how can I mitigate the impact?

    - by Max Pollack
    I'm currently experiencing a strange issue that my understanding of SQL Server doesn't quite mesh with. We use SQL as our file storage for our internal storage service, and our database has about half a million rows in it. Most of the files (86%) are 1MB or under, but even on fresh copies of our database, where we simply populate the table with data for the purposes of a test, it appears that rows with large amounts of data stored in a BLOB frequently cause timeouts when our SQL Server is under load.

    My understanding of how SQL Server deletes rows is that it's a garbage-collection process, i.e. the row is marked as a ghost and is later deleted by the ghost cleanup process after the changes are copied to the transaction log. This suggests to me that, regardless of the size of the data in the blob, row deletion should be close to instantaneous. However, when deleting these rows we are definitely experiencing large numbers of timeouts and astoundingly low performance. In our test data set, it's files over 30MB that cause this issue. This is an edge case; we don't frequently encounter these, and even though we're looking into SQL FILESTREAM as a solution to some of our problems, we're trying to narrow down where these issues are originating from.

    We ARE performing our deletes inside of a transaction. We're also performing updates to metadata such as file size stats, but these exist in a separate table away from the file data itself. Hierarchy data is stored in the table that contains the file information. Really, in the end it's not so much what we're doing around the deletes that matters; we just can't find any references to low delete performance on rows that contain a large amount of data in a BLOB. We are trying to determine if this is even an avenue worth exploring, or if it has to be one of our processes around the delete that's causing the issue.

    Are there any situations in which this could occur? Is it common for a database server to come to the point of complete timeouts when many of these deletes are occurring simultaneously? Is there a way to combat this issue if it exists?

    (cross-posted from StackOverflow)
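
    One mitigation worth testing (a hedged sketch; the table, column, and predicate names below are hypothetical): deleting in small capped batches keeps each transaction's lock footprint and log/ghost-cleanup work bounded, which tends to reduce blocking and timeouts under load compared with one large delete.

    ```
    -- Hypothetical schema: dbo.FileStore(Id INT PRIMARY KEY, Blob VARBINARY(MAX), ExpiredFlag BIT)
    -- Delete in capped batches so no single transaction holds locks for long.
    DECLARE @batch INT = 100;

    WHILE 1 = 1
    BEGIN
        DELETE TOP (@batch) FROM dbo.FileStore
        WHERE ExpiredFlag = 1;          -- hypothetical predicate for rows to purge

        IF @@ROWCOUNT < @batch BREAK;   -- nothing (or not much) left to delete
    END
    ```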

    Read the article

  • How to Configure Source NAT (Private IP => Public IP Outbound)

    - by DavidScherer
    I'm running VMware ESXi Free and have Zentyal SBS 3.2 running as a gateway. I have 5 public IPs (a /29; let's call them 69.1.1.1 - 69.1.1.5), and currently Zentyal is bound to 69.1.1.1 as the gateway, with the other 4 public IPs set as virtual interfaces in Zentyal (wan2-wan5).

    I have machines sitting on the private network (10.34.251.x) that, when going outbound (to Google, for instance), should be seen by the Internet as an IP other than the gateway (69.1.1.1). This is because our machines need to be able to communicate with third-party APIs that expect these requests to come from a specific IP.

    From what I could find, SNAT (Source NAT) in Zentyal is used to achieve this, but I'm not sure how to configure it and cannot find any specific documentation for it at Zentyal. I've tried setting this up a couple of different ways, with no results, and at this point I have no idea whether I'm going about this completely wrong, or whether my lack of experience with networking and the associated terminology is preventing me from placing the correct values in the correct fields. Zentyal gives me a form with the following fields for setting up "SNAT" rules. Perhaps someone can offer some guidance and definitions for them?

    - SNAT Address: Is this the public IP I want to masquerade as?
    - Outgoing Interface: Should this be my external NIC (the one connected to the public 'net), or is it the "private" interface? It sounds as though this should be the external interface, as I want the traffic from the internal network sent out over this interface (using a different IP than normal, anyway).
    - Source: Is this the source on the internal network (one of the private IPs?), a public IP I want to masquerade as, or something else entirely?
    - Destination: Is this a place on the Internet (e.g. "only do this for Google.com"/an IP), or am I allowing myself to become confused again?
    - Service: I'm assuming this allows me to restrict which services this rule will apply to, but is it for a service on the internal network or a service being accessed on the external network?

    If I can offer any further details or information to make what I'm trying to do more clear, I will happily do so. Honestly, any kind of help here would be very appreciated. I'm not a NetOps person or anything even close. I spend most of my day writing code, and my entire "team" at this company consists of "me, myself, and I", so while I try to broaden my knowledge base at every possible opportunity, I can only learn so much so fast, and I feel like with networking especially there's just so much, coupled with a learning curve for each solution that likes to (from my limited perspective) use slightly different terminology than what I'm used to (and I don't exactly have the necessary experience to cross-reference this stuff with the stuff I already know in context).
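
    For orientation, this is what such a rule boils down to at the iptables level underneath Zentyal (a hedged sketch; the internal host, public IP, and eth0-as-external-interface pairing are made up for illustration): packets leaving the external interface from a given internal source get their source address rewritten to a chosen public IP instead of the gateway's.

    ```
    # Hypothetical: internal host 10.34.251.10 should appear on the Internet as 69.1.1.2
    iptables -t nat -A POSTROUTING -s 10.34.251.10 -o eth0 -j SNAT --to-source 69.1.1.2
    ```

    Mapped back to the form, that would make SNAT Address the public IP to masquerade as, Outgoing Interface the external NIC, and Source the internal machine or subnet the rule should match - though that mapping is an inference from the iptables behaviour, not Zentyal documentation.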

    Read the article

  • Is Samba Server what I'm looking for, and if so, what do I need? (currently on DD-WRT Micro)

    - by Anthony
    I am really confused as to what Samba actually does and how it works. Here's what I'm hoping it does: I set up a Samba server on my LAN, and everyone will be able to see each other's shared files and swap them.

    But some of the documentation makes it sound like it will just allow Mac/Linux computers to see Windows computers. Other bits of the documentation make it sound more like a local server, where a Linux machine would install Samba and would see everyone and be visible to everyone, but that wouldn't change whether anybody else can see each other. While still other things I've read make it seem more like a file server, where everyone sees each other but file transfers are not peer-to-peer and instead need a host disk to act as a go-between for files.

    So, assuming I'm even in the right ballpark of what Samba does in terms of my goal of total cross-visibility on the network, I am left with needing to know what I'd need to set up the server, and whether it can be done and is worth it... DD-WRT's article on Samba is a bit ambiguous. One second it sounds as if I can run the server on Micro as long as it's set up on a USB drive, but then it also sounds like Micro can't run it at all, etc. If I can run it from a USB-connected drive, I still need to know if the files are actually stored on that drive. The dd-wrt article mentions:

    "You can run a Samba server on your main computer and run a client on your router (thus gaining writable storage for the router) or you can use Samba to share a drive connected (typically by USB) to the router among all the computers connected to your network."

    That one part, "to share a drive... among all the computers", makes it sound like the only benefit I get from Samba is a shared drive that any OS on the network can see, but they still won't see each other. But I'm very hopeful I'm misreading this. If the computers can see each other but still need the disk, how much space is generally a good idea? I'm basing this on the idea that the drive is a temporary store point. Obviously I'd have to get a drive big enough to store everything people wanted to share if the drive is a full-on file server.

    If I do have this all wrong, is there any software that achieves what I have in mind? Something that connects to the main router to bridge all clients?
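
    For a sense of scale, the server side of Samba is only a few lines of configuration per machine (a hedged sketch; the share name and path are made up). Each Linux or Mac box that runs a Samba server with something like this becomes visible to every SMB client on the LAN, and transfers then go directly between the two machines involved - no central go-between disk is required:

    ```
    # Hypothetical /etc/samba/smb.conf fragment
    [global]
       workgroup = WORKGROUP
       security = user

    [shared]
       # folder this machine exposes to the network (hypothetical path)
       path = /home/anthony/shared
       read only = no
       browseable = yes
    ```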

    Read the article

  • Why is domU faster than dom0 on IO?

    - by Paco
    I have installed Debian 7 on a physical machine. This is the configuration of the machine:

    ```
    3 hard drives using RAID 5
    Strip element size: 1M
    Read policy: Adaptive read ahead
    Write policy: Write Through

    /boot   200 MB   ext2
    /       15 GB    ext3
    SWAP    10 GB
    LVM     rest (~500 GB)
    ```

    I installed postgresql and created a big database (over 1 GB). I have an SQL request that takes a lot of time to run (a SELECT statement, so it only reads data from the database). This request takes approximately 5.5 seconds to run.

    Then, I installed Xen and created a domU with another Debian distro. On this OS, I also installed postgresql, with the same database. The same SQL request takes only 2.5 seconds to run.

    I checked the kernel on both dom0 and domU. `uname -a` returns "Linux debian 3.2.0-4-amd64 #1 SMP Debian 3.2.41-2+deb7u2 x86_64 GNU/Linux" on both systems. I checked the kernel parameters, which are approximately the same. For those that are relevant, I changed their values to make them match on both systems using sysctl. I saw no changes (the requests still take the same amount of time).

    After this, I checked the file systems. I used ext3 on domU. Still no changes. I installed hdparm and ran `hdparm -Tt` on all my partitions on both systems, and I get similar results. Now I am stuck; I don't know what is different and what could be the cause of such a big difference.

    Additional info: Debian runs on a Dell PowerEdge 2950 server; postgresql 9.1.9 (both dom0 and domU); xen-linux-system 3.2.0; xen-hypervisor 4.1. Thanks.

    EDIT: As Krzysztof Ksiezyk suggested, it might be due to some file caching system. I ran the dd command to test both the read and write speed. Here is domU:

    ```
    root@test1:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB
    ^C2020+0 records in
    2020+0 records out
    2020000000 bytes (2.0 GB) copied, 18.8289 s, 107 MB/s

    root@test1:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB
    2020+0 records in
    2020+0 records out
    2020000000 bytes (2.0 GB) copied, 15.0549 s, 134 MB/s
    ```

    And here is dom0:

    ```
    root@debian:~# dd if=/dev/zero of=/root/dd count=5MB bs=1MB
    ^C1693+0 records in
    1693+0 records out
    1693000000 bytes (1.7 GB) copied, 8.87281 s, 191 MB/s

    root@debian:~# dd if=/root/dd of=/dev/null count=5MB bs=1MB
    1693+0 records in
    1693+0 records out
    1693000000 bytes (1.7 GB) copied, 0.501509 s, 3.4 GB/s
    ```

    What can be the cause of this caching behavior? And how can we "fix" it? Can we apply it to dom0?

    EDIT 2: I switched my virtual disk type. To do so I followed this article. I did:

    ```
    dd if=/dev/vg0/test1-disk of=/mnt/test1-disk.img bs=16M
    ```

    Then in /etc/xen/test1.cfg, I changed the disk parameter to use file: instead of phy:. That should have removed the file caching, but I still get the same numbers (domU being much faster for Postgres).
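
    As a sanity check on the caching question (a hedged sketch): rerunning the same dd tests with O_DIRECT takes the Linux page cache out of the picture. The 3.4 GB/s read on dom0 is far beyond what three spinning disks can deliver, so if that number collapses to disk speed with direct I/O while the other figures barely move, the difference really is caching rather than the disk path itself.

    ```
    # Bypass the page cache on both the write and the read test
    dd if=/dev/zero of=/root/dd count=2000 bs=1MB oflag=direct
    dd if=/root/dd of=/dev/null count=2000 bs=1MB iflag=direct
    ```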

    Read the article

  • Is this iptables NAT exploitable from the external side?

    - by Karma Fusebox
    Could you please have a short look at this simple iptables/NAT setup? I believe it has a fairly serious security issue (due to being too simple).

    On this network there is one internet-connected machine (running Debian Squeeze/2.6.32-5 with iptables 1.4.8) acting as NAT/gateway for the handful of clients in 192.168/24. The machine has two NICs:

    - eth0: internet-faced
    - eth1: LAN-faced, 192.168.0.1, the default GW for 192.168/24

    The routing table is the two-NICs default, without manual changes:

    ```
    Destination     Gateway         Genmask         Flags Metric Ref  Use Iface
    192.168.0.0     0.0.0.0         255.255.255.0   U     0      0      0 eth1
    (externalNet)   0.0.0.0         255.255.252.0   U     0      0      0 eth0
    0.0.0.0         (externalGW)    0.0.0.0         UG    0      0      0 eth0
    ```

    The NAT is then enabled only and merely by these actions; there are no more iptables rules:

    ```
    echo 1 > /proc/sys/net/ipv4/ip_forward
    /sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    # (all iptables policies are ACCEPT)
    ```

    This does the job, but I miss several things here which I believe could be a security issue:

    - there is no restriction on allowed source interfaces or source networks at all
    - there is no firewalling part such as:

    ```
    (set policies to DROP)
    /sbin/iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
    /sbin/iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
    ```

    And thus, the questions of my sleepless nights are:

    1. Is this NAT service available to anyone in the world who sets this machine as his default gateway? I'd say yes it is, because there is nothing indicating that an incoming external connection (via eth0) should be handled any differently than an incoming internal connection (via eth1) as long as the output interface is eth0 - and routing-wise that holds true for both external and internal clients that want to access the internet. So if I am right, anyone could use this machine as an open proxy by having his packets NATted here. Please tell me whether that's right, or why it is not.

    2. As a "hotfix" I have added a "-s 192.168.0.0/24" option to the NAT-starting command. I would like to know if not using this option was indeed a security issue, or just irrelevant thanks to some mechanism I am not aware of.

    3. As the policies are all ACCEPT, there is currently no restriction on forwarding eth1 to eth0 (internal to external). But what are the effective implications of currently NOT having the restriction that only RELATED and ESTABLISHED states are forwarded from eth0 to eth1 (external to internal)? In other words, should I change the policies to DROP and apply the two "firewalling" rules I mentioned above, or does the lack of them not affect security?

    Thanks for clarification!
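
    Putting the pieces mentioned above together, a hedged sketch of the tightened version - this only rearranges the rules already quoted in the question, plus default-DROP forward policy:

    ```
    echo 1 > /proc/sys/net/ipv4/ip_forward

    # Only NAT traffic that actually originates from the LAN
    /sbin/iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE

    # Default-deny forwarding, then allow LAN->out and only reply traffic back in
    /sbin/iptables -P FORWARD DROP
    /sbin/iptables -A FORWARD -i eth1 -o eth0 -s 192.168.0.0/24 -j ACCEPT
    /sbin/iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
    ```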

    Read the article

  • Windows Server 2003 Standard R2 CD 1 cannot boot: freeze at No Emulation

    - by TGP1994
    Hi everyone. I've been interested in the Windows Server line of OSes, so since I qualify for DreamSpark, I thought I'd go download it and try it. I happened to have an old desktop that I was using a while ago for Windows XP, so I imaged the drive in preparation for it to be overwritten with the new OS. (This system has an Asus A7V8X-X motherboard, an AMD Athlon XP 2800+ processor, and 1GB of RAM.)

    I tried burning the first disc image on my newer desktop computer, running Windows XP, but the CD burner consistently failed at a particular track area from CD to CD, so it seemed like the burner was toast. Fortunately, I had a laptop, so I transferred the images over to that and burned the first disc there. The first burn went great, and the burning program reported no errors.

    I then took the CD over to the computer that I was intending to install Server onto, set the BIOS to boot from the CD drive, and booted it up. As normal, after the POST it printed "Boot from ATAPI CD-Rom: No Emulation", which I was used to seeing with bootable CDs. I waited for the "Press any key to continue..." message that I had become so familiar with on Windows discs, but saw none. The computer sat there for about 5 seconds with the CD spinning, then it spun down like it was done reading it. Nothing else happened. No response from the keyboard. I tried again; same result.

    I then downloaded ImgBurn and put the burned CD into the laptop that burned it originally. I also downloaded a fresh image from the DreamSpark site. I ran a verify session, and everything checked out. I later tried various DOS startup discs and then tried booting the winnt binary, which supposedly initiates the installation process. Either the shells reported that not enough memory was available (since they would be running in low-memory mode), or FreeDOS in particular would report illegal instructions right away.

    Is the image corrupt at DreamSpark, or am I doing something wrong?

    Read the article

  • Google analytics and multiple independent subdomains

    - by MTilsted
    I need some help setting up Google Analytics correctly. Here is my setup: we host sites for multiple customers, and each customer has their own subdomain on our site. So we have customerA.oursite.com and customerB.oursite.com, and as we add more customers we get more subdomains.

    We want to track all data for each customer independently, but I don't want to create a new Google tracking code for each new customer. So my plan is to track all visits with "oursite.com", and then create a filter in Google Analytics to get the data for each specific customer (all visits for a specific subdomain).

    Is this (one tracking code, and a subdomain filter) the right way to do it?

    To create a subdomain filter I add a new profile for each customer, and then add a custom filter saying include "Request URI" and fill in "CustomerDomain.oursite.com". Is this the correct way to do it?

    And a general question about filters: is it really impossible to create a new filter by applying it to data in an existing profile? I would really like to just collect all the data in one "main" profile and then create subdomain filters as we need them. But it seems that Google only applies filters to new incoming data, not existing data. Is this really true?

    The following is my tracking code. Is '_setDomainName', 'none' the right thing to do?

    ```
    <script type="text/javascript">
      /* Tracking code for qrtown.com */
      var _gaq = _gaq || [];
      _gaq.push(['_setAccount', 'UA-11584298-10']);
      _gaq.push(['_setDomainName', 'none']);
      _gaq.push(['_trackPageview']);

      (function() {
        var ga = document.createElement('script');
        ga.type = 'text/javascript';
        ga.async = true;
        ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
        var s = document.getElementsByTagName('script')[0];
        s.parentNode.insertBefore(ga, s);
      })();
    </script>
    ```
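
    For cross-subdomain tracking with classic ga.js, the documented pattern is to set the domain to the shared parent rather than 'none', so that hits from every customerX.oursite.com land in one property with cookies scoped to the parent domain (a sketch; swap in the real property ID):

    ```
    var _gaq = _gaq || [];
    _gaq.push(['_setAccount', 'UA-XXXXXXX-Y']);   // your property ID
    // Scope the GA cookies to the shared parent domain so every
    // customer subdomain reports into the same property
    _gaq.push(['_setDomainName', 'oursite.com']);
    _gaq.push(['_trackPageview']);
    ```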

    Read the article

  • Dlink search is hijacking my browser

    - by James
    For months now "DLink search" has been hijacking my search engines. I use google chrome, and I have organized my search engines in the handy dandy "manage search engines" tool about a TRILLION times. It never even says D-link is hacking my search engines. It does not show up! I have read many posts on this forum and others saying that to fix this problem from internet explorer: Setup, internet options, yadayada, magical fairies, and you are solved, but my browser is google chrome! How am I supposed to do this from there! I do not know how to re-setup my Dlink router, which is the cause of the problem! HOW? In those posts with the magical fairies fixing it, HUNDREDS responded saying, "yep, those fairies definitely fixed it right. :)" These people were so satisfied. IT WORKED FOR THEM, WHY NOT ME. I look at it and go ":(" because it does not help me. There are no options for anything to do with this in GOOGLE chrome. PLEASE EXPLAIN and HELP. I see no "SETUP" option, no "Internet Options" button, no anything. BTW the exact posts are these: "Uncheck Advanced DNS in the router internet setup. This will take care of it. I had this problem with my DLink router before." "I had this issue with my DIR-655 and unchecking the Advanced DNS setting in Setup - Internet - Manual Internet Connection Setup fixed it." "If this is just internet explorer, you can go to Tools Internet Options or Internet Options in Control Panel. From here, go to the advanced tab and click the Reset button." "I would set the router's DNS to a site like OpenDNS, and I would ensure the machines are set to get their DNS settings via DHCP or set the machine's DNS setting to OpenDNS. If the router's DNS looks like it was messed with, some bad software know the default passwords for routers and could have changed it. If you don't already I would make sure the password to the router is not default or easy to guess. I've had spyware change a machine's DNS, but the fact it is happening on all machines makes me wonder if it is the router." "Something got into your router and changed the dns server most likely, do a hard reset of the router and then change the password to something strong. Also check for a firmware update for the router and apply it as soon as possible."

    Read the article

  • How to run a restricted set of programs with Administrator privileges without giving up Admin access (Win7 Pro)

    - by frLich
    I have a shared system, running Windows 7 x64, restricted to a 'standard user' with no password. Not everyone who has access to the system has the administrator password.

    This works rather well, except for some applications - especially the unlock applications for encrypted hard drives/USB flash drives. The specific ones either require Administrator access (e.g. Seagate BlackArmor) or simply fail without it - since these programs are sending raw commands to a device, this is to be expected.

    I would like to be able to add the hashes of these particular programs to a whitelist, and have them run as administrator without needing any prompts. Since these are by definition on removable media, I can't simply use a filename or even a path. One of the users who shares the system can be considered 'crafty', so anything which temporarily grants administrator rights to a user account is certain to cause problems.

    What I'd like to be able to do:

    1. Create an admin account that can only run programs from a whitelist (or, failing that, from a directory). I can't find a good way to do this: as far as I can tell, SRP applies equally to ALL users. Even if I put a "deny" token on all directories on the system, such that new directories would inherit it, it could still potentially run things from the mounted USB devices. I also don't know whether it's possible to create a new directory that DOESN'T inherit from the parent, that would lack the deny token, and provide admin access.

    2. Find a lightweight service that will run these programs in its local context. Windows 7 seems to block cross-privilege-level communication by default, and I haven't found such a service for Windows 7. One example seems to be "sudo" (http://pages.cpsc.ucalgary.ca/~nfriess/sudo/), but because it uses a WLNOTIFY hook, it won't work under Vista or Windows 7.

    Non-solutions:

    - RunAs: requires the administrator password! (but everyone calls it "sudo" anyway)
    - RunAs /savecred: nice idea, but appears to be completely insecure.
    - RUNASSPC: same concept as RunAs; uses "encrypted" files with credentials, but checks in user-space.
    - Scheduled Tasks: "fixed" permissions make this difficult, and it doesn't support interactive processes even if it did.
    - SuRun: from Google: "Surun uses its own Windows service that adds the user to the group of administrators during program start and removes him automatically from that group again"

    Read the article

  • Achieving A 1600 x 900 Resolution For (Guest OS) Ubuntu Under (Host OS) Windows 7

    - by panamack
    I am running Ubuntu 11.10 as a guest OS using VirtualBox 4.1.16 installed on Windows 7 Ultimate. On my laptop I'd like to be able to run Ubuntu in full-screen mode at 1600 x 900. Within the virtual machine I only have options to select 4:3 display settings such as 1600 x 1200, 1440 x 1050, etc. I have Guest Additions installed.

    At the Windows command prompt, I tried typing:

    ```
    VBoxManage setextradata "Virtual Ubuntu Coursera ESSAAS" "CustomVideoMode1" "1600x900x16"
    ```

    This didn't work; there is still no 1600 x 900 resolution available in Ubuntu. I tried this having read the following section of the VirtualBox help (which also says something about a 'video mode hint feature' - not sure what this means):

    "9.7. Advanced display configuration. 9.7.1. Custom VESA resolutions. Apart from the standard VESA resolutions, the VirtualBox VESA BIOS allows you to add up to 16 custom video modes which will be reported to the guest operating system. When using Windows guests with the VirtualBox Guest Additions, a custom graphics driver will be used instead of the fallback VESA solution so this information does not apply. Additional video modes can be configured for each VM using the extra data facility. The extra data key is called CustomVideoMode<x> with x being a number from 1 to 16. Please note that modes will be read from 1 until either the following number is not defined or 16 is reached. The following example adds a video mode that corresponds to the native display resolution of many notebook computers:

    ```
    VBoxManage setextradata "VM name" "CustomVideoMode1" "1400x1050x16"
    ```

    The VESA mode IDs for custom video modes start at 0x160. In order to use the above defined custom video mode, the following command line has to be supplied to Linux:

    ```
    vga = 0x200 | 0x160
    vga = 864
    ```

    For guest operating systems with VirtualBox Guest Additions, a custom video mode can be set using the video mode hint feature."

    UPDATE 02.06.12: I've just tried creating a new virtual machine using the same original disk image I had been given. This had Guest Additions v4.1.6 installed, and it provided me with the 1600 x 900 full-screen display I want. It's after I install Guest Additions v4.1.16 (the version included with my VirtualBox installation) that my only choices are 4:3 displays, e.g. 1600 x 1200. It seems this is the cause.

    Read the article

  • NGINX SSL Certificate Not Working

    - by LeSamAdmin
    I've been working on SSL stuff and getting nowhere, despite something like 4 tutorials... I've bought an SSL certificate for pingrglobe.com, and am now trying to apply it to my servers. Here's my nginx config:

    ```
    http {
        server {
            listen 80;
            server_name pingrglobe.com;
            rewrite ^(.*) http://www.pingrglobe.com$1 permanent;
        }
        server {
            listen 443;
            ssl on;
            ssl_certificate /etc/nginx/ssl/pingrglobe.crt;
            ssl_certificate_key /etc/nginx/ssl/pingrglobe.key;

            # enables SSLv3/TLSv1, but not SSLv2 which is weak and should no longer be used.
            ssl_protocols SSLv3 TLSv1;
            # Disables all weak ciphers
            ssl_ciphers ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM;

            server_name www.pingrglobe.com;
            root /var/www/pingrglobe.com;
            index index.html index.php;

            location / {
                try_files $uri $uri/ @extensionless-php;
                add_header Access-Control-Allow-Origin *;
            }

            rewrite ^/blog/blogpost/(.+)$ /blog/blogpost?post=$1 last;
            rewrite ^/viewticket/(.+)/(.*)$ /viewticket?tid=$1&$2 last;
            rewrite ^/vemail/(.+)$ /vemail?eid=$1 last;
            rewrite ^/serversettings/(.+)$ /serversettings?srvid=$1 last;
            rewrite ^/notification/(.+)$ /notification?id=$1 last;
            rewrite ^/viewreport/(.+)$ /viewreport?srvid=$1 last;
            rewrite ^/removeserver/(.+)$ /removeserver?srvid=$1 last;
            rewrite ^/staffviewticket/(.+)/(.*)$ /staffviewticket?tid=$1&$2 last;
            rewrite ^/activate/(.*)/(.*)/(.*)$ /activate?user=$1&code=$2&email=$3 last;
            rewrite ^/activate2/(.*)/(.*)/(.*)$ /activate2?user=$1&code=$2&email=$3 last;
            rewrite ^/passwordtoken/(.+)/(.*)/(.*)$ /passwordtoken?user=$1&token=$2&email=$3 last;

            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

            location @extensionless-php {
                rewrite ^(.*)$ $1.php last;
            }

            location ~ /\. {
                deny all;
            }
        }
    }
    ```

    SSL doesn't work, as you can see here: https://www.pingrglobe.com
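
    One common cause when a purchased certificate "doesn't work" in nginx is a missing intermediate chain: nginx expects the ssl_certificate file to contain the server certificate followed by the CA's intermediate certificate(s) in one file. A hedged sketch (the intermediate filename is whatever the CA actually supplied):

    ```
    # Server cert first, then the CA-supplied intermediate(s),
    # then point ssl_certificate at the combined file
    cat pingrglobe.crt intermediate.crt > /etc/nginx/ssl/pingrglobe.chained.crt
    # in nginx.conf: ssl_certificate /etc/nginx/ssl/pingrglobe.chained.crt;
    nginx -t && service nginx reload
    ```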

    Read the article

  • function to org-sort by three (3) criteria: due date / priority / title

    - by lawlist
    Is anyone aware of an org-sort function / modification that can refile / organize a group of TODOs so that it sorts them by three (3) criteria: first sort by due date, second sort by priority, and third sort by title of the task?

    EDIT: If anyone can please help me modify this so that undated TODOs are sorted last, that would be greatly appreciated - at the present time, undated TODOs are not being sorted:

    ```
    ;; multiple sort
    (defun org-sort-multi (&rest sort-types)
      "Multiple sorts on a certain level of an outline tree, or plain list items.
    SORT-TYPES is a list where each entry is either a character or a cons pair
    (BOOL . CHAR), where BOOL is whether or not to sort case-sensitively, and CHAR
    is one of the characters defined in `org-sort-entries-or-items'. Entries are
    applied in back to front order.

    Example: To sort first by TODO status, then by priority, then by date, then
    alphabetically (case-sensitive) use the following call:
    (org-sort-multi '(?d ?p ?t (t . ?a)))"
      (interactive)
      (dolist (x (nreverse sort-types))
        (when (char-valid-p x)
          (setq x (cons nil x)))
        (condition-case nil
            (org-sort-entries (car x) (cdr x))
          (error nil))))

    ;; sort current level
    (defun lawlist-sort (&rest sort-types)
      "Sort the current org level.
    SORT-TYPES is a list where each entry is either a character or a cons pair
    (BOOL . CHAR), where BOOL is whether or not to sort case-sensitively, and CHAR
    is one of the characters defined in `org-sort-entries-or-items'. Entries are
    applied in back to front order. Defaults to \"?o ?p\" which is sorted by TODO
    status, then by priority"
      (interactive)
      (when (equal mode-name "Org")
        (let ((sort-types (or sort-types
                              (if (or (org-entry-get nil "TODO")
                                      (org-entry-get nil "PRIORITY"))
                                  '(?d ?t ?p) ;; date, time, priority
                                '((nil . ?a))))))
          (save-excursion
            (outline-up-heading 1)
            (let ((start (point)) end)
              (while (and (not (bobp)) (not (eobp)) (<= (point) start))
                (condition-case nil
                    (outline-forward-same-level 1)
                  (error (outline-up-heading 1))))
              (unless (> (point) start) (goto-char (point-max)))
              (setq end (point))
              (goto-char start)
              (apply 'org-sort-multi sort-types)
              (goto-char end)
              (when (eobp) (forward-line -1))
              (when (looking-at "^\\s-*$")
                ;; (delete-line)
                )
              (goto-char start)
              ;; (dotimes (x ) (org-cycle))
              )))))
    ```

    Read the article

  • Google Rules for Retail

    - by David Dorf
    In the book What Would Google Do?, Jeff Jarvis outlines ten "Google Rules" that define how Google acts.  These rules help define how Web 2.0 businesses operate today and into the future.  While there's a chapter in the book on applying these rules to the retail industry, it wasn't very in-depth.  So I've decided to more directly apply the rules to retail, along with some notable examples of success.  The list below pairs each of Jeff's Google Rules with industry examples and a New Retailer Rule that I created.

    - New Relationship (your worst customer is your friend; your best customer is your partner). Industry examples: Newegg.com lets manufacturers respond to customer comments that are critical of the product, and their EggXpert site lets customers help other customers. New Retailer Rule: Listen to what your customers are saying about you.  Convert the critics to fans and the fans to influencers.
    - New Architecture (join a network; be a platform). Industry examples: Tesco and BestBuy released APIs for their product catalogs so third parties could create new applications. New Retailer Rule: Become a destination for information.
    - New Publicness (life is public, so is business). Industry examples: the Zappos and WholeFoods founders are prolific tweeters/bloggers, sharing their opinions and connecting to customers.  It's not always pretty, but it's genuine. New Retailer Rule: Be transparent.  Share both your successes and failures with your customers.
    - New Society (elegant organization). Industry examples: Wet Seal helps their customers assemble outfits and show them off to each other.  Barnes & Noble has a community site that includes a book club. New Retailer Rule: Communities of your customers already exist, so help them organize better.
    - New Economy (mass market is dead; long live the mass of niches). Industry examples: lululemon found a niche for yoga-inspired athletic wear.  Threadless uses crowd-sourcing to design short runs of T-shirts. New Retailer Rule: Serve small markets with niche products.
    - New Business Reality (decide what business you're in). Industry example: when Lowes realized catering to women brought the men along, their sales increased. New Retailer Rule: Customers want experiences to go with the products they buy.
    - New Attitude (trust the people and listen). Industry example: in 2008 Starbucks launched MyStarbucksIdea to solicit ideas from their customers. New Retailer Rule: Use social networks as additional data points for making better merchandising decisions.
    - New Ethic (be honest and transparent; don't be evil). Industry examples: Target is giving away reusable shopping bags for Earth Day.  Kohl's has outfitted 67 stores with solar arrays. New Retailer Rule: Being green earns customers' respect and lowers costs too.
    - New Speed (life is live). Industry examples: H&M and Zara keep up with fashion trends. New Retailer Rule: Be prepared to pounce on your customers' fickle interests.
    - New Imperatives (encourage, enable and protect innovation). Industry examples: 1-800-Flowers was the first to do sales in Facebook and an early adopter of mobile commerce.  The Sears Personal Shopper mobile app finds products based on a photo. New Retailer Rule: Give your staff permission to fail so innovation won't be stifled.

    Jeff will be a keynote speaker at Crosstalk, our upcoming annual user conference, so I'm looking forward to hearing more of his perspective on retail and the new economy.

    Read the article

  • MEF CompositionInitializer for WPF

    - by Reed
    The Managed Extensibility Framework is an amazingly useful addition to the .NET Framework.  I was very excited to see System.ComponentModel.Composition added to the core framework.  Personally, I feel that MEF is one tool I’ve always been missing in my .NET development. Unfortunately, one perfect scenario for MEF tends to fall short of it’s full potential is in Windows Presentation Foundation development.  In particular, there are many times when the XAML parser constructs objects in WPF development, which makes composition of those parts difficult.  The current release of MEF (Preview Release 9) addresses this for Silverlight developers via System.ComponentModel.Composition.CompositionInitializer.  However, there is no equivalent class for WPF developers. The CompositionInitializer class provides the means for an object to compose itself.  This is very useful with WPF and Silverlight development, since it allows a View, such as a UserControl, to be generated via the standard XAML parser, and still automatically pull in the appropriate ViewModel in an extensible manner.  Glenn Block has demonstrated the usage for Silverlight in detail, but the same issues apply in WPF. As an example, let’s take a look at a very simple case.  Take the following XAML for a Window: <Window x:Class="WpfApplication1.MainView" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="MainWindow" Height="220" Width="300"> <Grid> <TextBlock Text="{Binding TheText}" /> </Grid> </Window> This does nothing but create a Window, add a simple TextBlock control, and use it to display the value of our “TheText” property in our DataContext class.  Since this is our main window, WPF will automatically construct and display this Window, so we need to handle constructing the DataContext and setting it ourselves. We could do this in code or in XAML, but in order to do it directly, we would need to hard code the ViewModel type directly into our XAML code, or we would need to construct the ViewModel class and set it in the code behind.  Both have disadvantages, and the disadvantages grow if we’re using MEF to compose our ViewModel. Ideally, we’d like to be able to have MEF construct our ViewModel for us.  This way, it can provide any construction requirements for our ViewModel via [ImportingConstructor], and it can handle fully composing the imported properties on our ViewModel.  CompositionInitializer allows this to occur. We use CompositionInitializer within our View’s constructor, and use it for self-composition of our View.  Using CompositionInitializer, we can modify our code behind to: public partial class MainView : Window { public MainView() { InitializeComponent(); CompositionInitializer.SatisfyImports(this); } [Import("MainViewModel")] public object ViewModel { get { return this.DataContext; } set { this.DataContext = value; } } } We then can add an Export on our ViewModel class like so: [Export("MainViewModel")] public class MainViewModel { public string TheText { get { return "Hello World!"; } } } MEF will automatically compose our application, decoupling our ViewModel injection to the DataContext of our View until runtime.  When we run this, we’ll see: There are many other approaches for using MEF to wire up the extensible parts within your application, of course.  
Please download CompositionInitializer and CompositionHost for VS 2010 RC, and contact me with any feedback: Composition.Initialization.Desktop.zip

Edit on 3/29: Glenn Block has since updated his version of CompositionInitializer (and ExportFactory&lt;T&gt;!), and made it available here: http://cid-f8b2fd72406fb218.skydrive.live.com/self.aspx/blog/Composition.Initialization.Desktop.zip. This is a .NET 3.5 solution, and should soon be pushed to CodePlex and made available on the main MEF site.

    Read the article

  • Windows Azure: Announcing Support for Windows Server 2012 R2 + Some Nice Price Cuts

    - by ScottGu
Today we released some great updates to Windows Azure:

- Virtual Machines: Support for Windows Server 2012 R2
- Cloud Services: Support for Windows Server 2012 R2 and .NET 4.5.1
- Windows Azure Pack: Use Windows Azure features on-premises using Windows Server 2012 R2
- Price Cuts: Up to 22% price reduction on memory-intensive instances

Below are more details about each of the improvements.

Virtual Machines: Support for Windows Server 2012 R2

This morning we announced the release of Windows Server 2012 R2 – a fantastic update to Windows Server that includes a ton of great enhancements. This morning we are also excited to announce that the general availability image of Windows Server 2012 R2 is now supported on Windows Azure. Windows Azure is the first cloud provider to offer the final release of Windows Server 2012 R2, and it is incredibly easy to launch your own Windows Server 2012 R2 instance with it.

To create a new Windows Server 2012 R2 instance, simply choose New -> Compute -> Virtual Machine within the Windows Azure Management Portal. You can select the "Windows Server 2012 R2" image and create a new Virtual Machine using the "Quick Create" option, or alternatively click the "From Gallery" option if you want to customize even more configuration options (endpoints, remote PowerShell, availability set, etc).

Creating and instantiating a new Virtual Machine on Windows Azure is very fast; in fact, the Windows Server 2012 R2 image now deploys and runs 30% faster than previous versions of Windows Server. Once the VM is deployed you can drill into it to track its health and manage its settings, and clicking the "Connect" button allows you to remote desktop into the VM – at which point you can customize and manage it as a full administrator however you want.

If you haven't tried Windows Server 2012 R2 yet, give it a try with Windows Azure. There is no easier way to get an instance of it up and running!

Cloud Services: Support for using Windows Server 2012 R2 with Web and Worker Roles

Today's Windows Azure release also allows you to use Windows Server 2012 R2 and .NET 4.5.1 within Web and Worker Roles in Cloud Service based applications. Enabling this is easy: you can configure an existing Cloud Service application to use Windows Server 2012 R2 by updating your Cloud Service Configuration File (.cscfg) to use the new "OS Family 4" setting, as sketched below.
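For reference, here is a minimal sketch of that setting in a service configuration file – osFamily="4" is the value that selects Windows Server 2012 R2, while the service and role names are placeholders, not taken from the original post:

```xml
<!-- Minimal .cscfg sketch; osFamily="4" selects the Windows Server 2012 R2
     guest OS. Service and role names here are hypothetical. -->
<ServiceConfiguration serviceName="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
    osFamily="4" osVersion="*">
  <Role name="MyWebRole">
    <Instances count="2" />
  </Role>
</ServiceConfiguration>
```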
Alternatively, you can use the Windows Azure Management Portal to update cloud services that are already deployed on Windows Azure: choose the Configure tab on the service and select Windows Server 2012 R2 in the Operating System Family dropdown.

The approaches above enable you to immediately take advantage of Windows Server 2012 R2 and .NET 4.5.1 and all the great features they provide.

Windows Azure Pack: Use Windows Azure features on Windows Server 2012 R2

Today we also made generally available the Windows Azure Pack, a free download that enables you to run Windows Azure technology within your own datacenter, an on-premises private cloud environment, or with one of our service provider/hosting partners who run Windows Server. Windows Azure Pack provides a management portal that has the exact same UI as the Windows Azure Management Portal, within which you can create and manage Virtual Machines, Web Sites, and Service Bus – all of which can run on Windows Server and System Center.

The services provided with the Windows Azure Pack are consistent with the services offered within our Windows Azure public cloud offering. This consistency enables organizations and developers to build applications and solutions that can run in any hosting environment – and which use the same development and management approach. The end result is an offering with incredible flexibility. You can learn more about Windows Azure Pack and download/deploy it today here.

Price Cuts: Up to 22% Reduction on Memory-Intensive Instances

Today we are also reducing prices by up to 22% on our memory-intensive VM instances (specifically our A5, A6, and A7 instances). These price reductions apply to both Windows and Linux VM instances, as well as to Cloud Service based applications. They will take effect in November, and will enable you to run applications that demand larger memory (such as SharePoint, databases, in-memory analytics, etc) even more cost effectively.

Summary

Today's release enables you to start using Windows Server 2012 R2 within Windows Azure immediately, and to take advantage of our Cloud OS vision both within our datacenters and, using the Windows Azure Pack, within your existing datacenters and those of our partners.

If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Creating Descriptive Flex Field (DFF) Bean in OAF

    - by Manoj Madhusoodanan
In this blog I will explain how to add a custom DFF to a custom OAF page. I am using the XXCUST_DFF_DEMO table to store the DFF values, and a custom DFF named XXCUST_PERSON_DFF. The following steps need to be performed to create this solution:

1) Register the custom table in Oracle Applications
2) Register the DFF
3) Define the segments of the DFF
4) Create BC4J components for OAF and an OA page which holds the DFF

I will explain the steps in detail below.

Register the custom table in Oracle Applications

Since I am using a custom DFF, I have to register the custom table in which I am going to capture the values. Please click here to see the table script. I am using the AD_DD package to register the custom table; please click here to see the table registration script. Verify that the table has registered successfully (Navigation: Application Developer > Application > Database > Table). The table has registered successfully.

Register the DFF

The next step is to register the DFF. Navigate to Application Developer > Flex Field > Descriptive > Register and give the details as below. Click on Reference Fields and set the Reference Field to ATTRIBUTE_CATEGORY. Click on the Columns button to verify that the columns ATTRIBUTE_CATEGORY and ATTRIBUTE1 ... ATTRIBUTE30 are enabled. The DFF has registered successfully.

Define the segments of the DFF

Here I am going to define the segments of the DFF. Navigate to Application Developer > Flex Field > Descriptive > Segments and query for "XXCUST - Person DFF". Uncheck "Freeze Flexfield Definition".

In my DFF's reference field I want to display a value set with the values "Permanent" and "Contractor", so I define a value set XXCUST_EMPLOYMENT_TYPE (Navigation: Application Developer > Flex Field > Descriptive > Validation > Sets) and then assign its values (Navigation: Application Developer > Flex Field > Descriptive > Validation > Values). Assign XXCUST_EMPLOYMENT_TYPE as the Context Field Valueset.

Set up the Context Field Values based on the table below:

Context Code         | Segments
---------------------|--------------------------------------
Global Data Elements | Phone Number, Email, Fax
Contractor           | Manager Extension Number, CSP Name
Permanent            | Extension Number, Access Card Number

Phone Number, Email and Fax always display. When the user chooses the context value "Contractor", Manager Extension Number and CSP Name are shown; in the case of "Permanent", Extension Number and Access Card Number are shown. Assign a value set to each of the segments as well, for Global Data Elements, "Contractor" and "Permanent".

Check the "Freeze Flexfield Definition" check box and save. The standard concurrent program "Flexfield View Generator" will generate the XXCUST_DFF_DEMO_DFV view which we mentioned in the DFF registration step. The DFF has now been created successfully and is ready to use.

Create BC4J components for OAF and an OA page which holds the DFF

Create the BC4J components (EO, VO and AM) appropriately, and create the page based on the created VO. For the DFF, create an item of type "flex" and set its properties as described below. Note: you cannot create a flex item directly under a messageComponentLayout region, but you can create a messageLayout region under the messageComponentLayout region and add the flex item under the messageLayout region. In the Segment List property, give the segment names which you want to display. The syntax for this is:

Global Data Elements|SEGMENT 1|...|SEGMENT N||[Context Code1]|SEGMENT 1|...|SEGMENT N||[Context Code2]|SEGMENT 1|...|SEGMENT N||...
E.g.: Global Data Elements|Phone Number|Email|Fax||Contractor|Manager Extension Number|CSP Name||Permanent|Extension Number|Access Card Number

When you change the context value, the corresponding segments display automatically via PPR in the page. You can attach a partial action to the DFF bean programmatically so that you can identify the action related to the DFF; pageContext.getParameter(EVENT_PARAM) will return "FLEX_CONTEXT_CHANGEDPersonDFF" when you change the DFF context.

The page is now ready to test. When you choose "Contractor" you see the contractor segments; when you choose "Permanent" you see the permanent segments. Give proper values and press Apply, and you can see the values populated in the table.
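Since the registration script itself is behind a link above, here is a hedged sketch of the kind of AD_DD calls such a script typically contains – the application short name, table type and column sizes are assumptions for illustration, not taken from the original script:

```sql
-- Hedged sketch of a table registration script using the standard AD_DD API.
-- 'XXCUST' (application short name), the table type 'T' (transactional) and
-- the column width are illustrative assumptions.
BEGIN
  ad_dd.register_table(p_appl_short_name => 'XXCUST',
                       p_tab_name        => 'XXCUST_DFF_DEMO',
                       p_tab_type        => 'T');

  ad_dd.register_column(p_appl_short_name => 'XXCUST',
                        p_tab_name        => 'XXCUST_DFF_DEMO',
                        p_col_name        => 'ATTRIBUTE_CATEGORY',
                        p_col_seq         => 1,
                        p_col_type        => 'VARCHAR2',
                        p_col_width       => 30,
                        p_nullable        => 'Y',
                        p_translate       => 'N');

  -- ATTRIBUTE1 .. ATTRIBUTE30 would be registered the same way,
  -- incrementing p_col_seq for each column.
  COMMIT;
END;
/
```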

    Read the article

  • New MySQL Cluster 7.3 Previews: Foreign Keys, NoSQL Node.js API and Auto-Tuned Clusters

    - by Mat Keep
At this week's MySQL Connect conference, Oracle previewed an exciting new wave of developments for MySQL Cluster, further extending its simplicity and flexibility by expanding the range of use-cases, adding new NoSQL options, and automating configuration.

What's new:
- Development Release 1: MySQL Cluster 7.3 with Foreign Keys
- Early Access "Labs" Preview: MySQL Cluster NoSQL API for Node.js
- Early Access "Labs" Preview: MySQL Cluster GUI-Based Auto-Installer

In this blog, I'll introduce you to the features being previewed. Review the blogs listed below for more detail on each of the specific features discussed. Save the date: a live webinar is scheduled for Thursday 25th October at 0900 Pacific Time / 1600 UTC where we will discuss each of these enhancements in more detail. Registration will open soon and be published to the MySQL webinars page.

MySQL Cluster 7.3: Development Release 1

The first MySQL Cluster 7.3 Development Milestone Release (DMR) previews Foreign Keys, bringing powerful new functionality to MySQL Cluster while eliminating development complexity. Foreign Key support has been one of the most requested enhancements to MySQL Cluster – enabling users to simplify their data models and application logic – while extending the range of use-cases for both custom projects requiring referential integrity and packaged applications, such as eCommerce, CRM, CMS, etc.

Implementation

The Foreign Key functionality is implemented directly within the MySQL Cluster data nodes, allowing any client API accessing the cluster to benefit from it – whether SQL or one of the NoSQL interfaces (Memcached, C++, Java, JPA, HTTP/REST or the new Node.js API, discussed later). The core referential actions defined in the SQL:2003 standard are implemented: CASCADE, RESTRICT, NO ACTION and SET NULL. In addition, the MySQL Cluster implementation supports the online adding and dropping of Foreign Keys, ensuring the Cluster continues to serve both read and write requests during the operation. This represents a further enhancement to MySQL Cluster's support for online schema changes, i.e. adding and dropping indexes, adding columns, etc. Read this blog for a demonstration of using Foreign Keys with MySQL Cluster; a minimal sketch also appears at the end of this post.

Getting Started with MySQL Cluster 7.3 DMR1: Users can download either the source or the binary and evaluate the MySQL Cluster 7.3 DMR with Foreign Keys now! (Select the Development Release tab.)

MySQL Cluster NoSQL API for Node.js

Node.js is hot! In a little over 3 years, it has become one of the most popular environments for developing next generation web, cloud, mobile and social applications. Bringing JavaScript from the browser to the server, the design goal of Node.js is to build new real-time applications supporting millions of client connections, serviced by a single CPU core. Making it simple to further extend the flexibility and power of Node.js to the database layer, we are previewing the Node.js JavaScript API for MySQL Cluster as an Early Access release, available for download now from http://labs.mysql.com/. Select the following build: MySQL-Cluster-NoSQL-Connector-for-Node-js. Alternatively, you can clone the project at the MySQL GitHub page.

Implemented as a module for the V8 engine, the new API provides Node.js with a native, asynchronous JavaScript interface that can be used to both query and receive result sets directly from MySQL Cluster, without transformations to SQL.
Figure 1: MySQL Cluster NoSQL API for Node.js enables end-to-end JavaScript development

Rather than just presenting a simple interface to the database, the Node.js module integrates the MySQL Cluster native API library directly within the web application itself, enabling developers to seamlessly couple their high performance, distributed applications with a high performance, distributed persistence layer delivering 99.999% availability. The new Node.js API joins a rich array of NoSQL interfaces available for MySQL Cluster. Whichever API is chosen for an application, SQL and NoSQL can be used concurrently across the same data set, providing the ultimate in developer flexibility. Get started with the MySQL Cluster NoSQL API for Node.js tutorial.

MySQL Cluster GUI-Based Auto-Installer

Compatible with both MySQL Cluster 7.2 and 7.3, the Auto-Installer makes it simple for DevOps teams to quickly configure and provision highly optimized MySQL Cluster deployments – whether on-premise or in the cloud. Implemented with a standard HTML GUI and a Python-based web server back-end, the Auto-Installer intelligently configures MySQL Cluster based on application requirements and auto-discovered hardware resources.

Figure 2: Automated Tuning and Configuration of MySQL Cluster

Developed by the same engineering team responsible for the MySQL Cluster database, the installer provides standardized configurations that make it simple, quick and easy to build stable and high performance clustered environments. The Auto-Installer is previewed as an Early Access release, available for download now from http://labs.mysql.com/, by selecting the MySQL-Cluster-Auto-Installer build. You can read more about getting started with the MySQL Cluster Auto-Installer here, and watch the YouTube video for a demonstration of using it.

Getting Started with MySQL Cluster

If you are new to MySQL Cluster, the Getting Started guide will walk you through installing an evaluation cluster on a single host (these guides reflect MySQL Cluster 7.2, but apply equally well to 7.3 and the Early Access previews) – or use the new MySQL Cluster Auto-Installer! Download the Guide to Scaling Web Databases with MySQL Cluster to learn more about its architecture, design and ideal use-cases. Post any questions to the MySQL Cluster forum, where our Engineering team and the MySQL Cluster community will attempt to assist you. Post any bugs you find to the MySQL bug tracking system (select MySQL Cluster from the Category drop-down menu). And if you have any feedback, please post it to the Comments section here or in the blogs referenced in this article.

Summary

MySQL Cluster 7.2 is the GA, production-ready release of MySQL Cluster. The first Development Release of MySQL Cluster 7.3 and the Early Access previews give you the opportunity to preview and evaluate future developments in the MySQL Cluster database, and we are very excited to be able to share that with you. Let us know how you get on with MySQL Cluster 7.3, and which other features you want to see in future releases, by using the comments on this blog.
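As promised above, here is a minimal sketch of Foreign Keys on the NDB engine against the 7.3 DMR – the schema and the choice of referential action are illustrative, not taken from the release notes:

```sql
-- Illustrative schema; ENGINE=ndbcluster stores the tables in the data
-- nodes, where the foreign key is enforced for SQL and NoSQL clients alike.
CREATE TABLE towns (
  town_id INT NOT NULL PRIMARY KEY,
  name    VARCHAR(64)
) ENGINE=ndbcluster;

CREATE TABLE residents (
  resident_id INT NOT NULL PRIMARY KEY,
  name        VARCHAR(64),
  town_id     INT
) ENGINE=ndbcluster;

-- Foreign keys can be added (and dropped) online; the cluster keeps
-- serving reads and writes while the constraint is created.
ALTER TABLE residents
  ADD CONSTRAINT fk_residents_town
  FOREIGN KEY (town_id) REFERENCES towns (town_id)
  ON DELETE CASCADE;
```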

    Read the article

< Previous Page | 319 320 321 322 323 324 325 326 327 328 329 330  | Next Page >