Search Results

Search found 13635 results on 546 pages for 'domain policies'.


  • How to install a new TFS checkin policy on a TFS 2010 server?

    - by rhart
    Hi, We've recently upgraded our TFS server from TFS 2008 to TFS 2010. We've been researching a couple of new add-on checkin policies we want to install. The only problem is that all the documentation I can find on adding new policies to the server appears to be specific to TFS 2008 or earlier. Those steps involve adding new registry keys which do not exist on our TFS 2010 server. Does anybody know where the process for installing new checkin policies on a TFS 2010 server, so they can be applied to Team Projects, is documented? Thanks!
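
    A detail worth checking (offered as an aside, not from the original question): in TFS 2010, custom checkin policies are evaluated client-side, so registration is typically done on each developer machine rather than on the server, under the Visual Studio 2010 key. A sketch of such a registration, with a hypothetical policy DLL path:

        Windows Registry Editor Version 5.00

        ; Value name = policy name, value = full path to the policy assembly
        ; (on 64-bit machines the key lives under Wow6432Node)
        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\10.0\TeamFoundation\SourceControl\Checkin Policies]
        "MyCompany.CheckinPolicies"="C:\\Tools\\MyCompany.CheckinPolicies.dll"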

    Read the article

  • KVM Guest installed from console. But how to get to the guest's console?

    - by badbishop
    I'm trying to install a fully virtualized guest (Fedora 14 x86_64) on KVM (RHEL 6), using the command line only (both hypervisor and guest). It completes without errors, and without a tangible result. I'd like to know how to do a text-only installation. So, here's what I've done:

        # virt-install \
          --name=FE --ram=756 --vcpus=1 \
          --file=/var/lib/libvirt/images/FE.img --network bridge:br0 \
          --nographics --os-type=linux \
          --extra-args='console=tty0' -v \
          --cdrom=/media/usb/Fedora-14-x86_64-Live-Desktop.iso
        Starting install...
        Creating domain... | 0 B 00:00
        Connected to domain FE
        Escape character is ^]
        ÿ

    Now what? As I understand after googling for a couple of days, I should see the guest's output from the text installation, but nothing happens. virt-viewer cannot connect to it, kindly suggesting that I explore all the options by adding --help (which I did). If I reconnect with virsh, I see this:

        Domain installation still in progress. You can reconnect to the console
        to complete the installation process.
        [root@v ~]# virsh console FE
        Connected to domain FE
        Escape character is ^]

    This shows that the VM is running:

        # virsh list
         Id Name State
        ----------------------------------
          8 FE   running

    Qemu log:

        LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin /usr/libexec/qemu-kvm -S -M rhel6.0.0 -enable-kvm -m 756 -smp 1,sockets=1,cores=1,threads=1 -name FE -uuid 6989d008-7c89-424c-d2d3-f41235c57a18 -nographic -nodefconfig -nodefaults -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/FE.monitor,server,nowait -mon chardev=monitor,mode=control -rtc base=utc -no-reboot -boot d -drive file=/var/lib/libvirt/images/FE.img,if=none,id=drive-ide0-0-0,format=raw,cache=none -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -drive file=/media/usb/Fedora-14-x86_64-Live-Desktop.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=20,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:0a:65:8d,bus=pci.0,addr=0x2 -chardev pty,id=serial0 -device isa-serial,chardev=serial0 -usb -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3
        char device redirected to /dev/pts/1

    Output of /etc/libvirt/qemu/FE.xml:

        # cat /etc/libvirt/qemu/FE.xml
        <domain type='kvm'>
          <name>FE</name>
          <uuid>6989d008-7c89-424c-d2d3-f41235c57a18</uuid>
          <memory>774144</memory>
          <currentMemory>774144</currentMemory>
          <vcpu>1</vcpu>
          <os> <type arch='x86_64' machine='rhel6.0.0'>hvm</type> <boot dev='hd'/> </os>
          <features> <acpi/> <apic/> <pae/> </features>
          <clock offset='utc'/>
          <on_poweroff>destroy</on_poweroff>
          <on_reboot>restart</on_reboot>
          <on_crash>restart</on_crash>
          <devices>
            <emulator>/usr/libexec/qemu-kvm</emulator>
            <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='none'/> <source file='/var/lib/libvirt/images/FE.img'/> <target dev='hda' bus='ide'/> <address type='drive' controller='0' bus='0' unit='0'/> </disk>
            <disk type='block' device='cdrom'> <driver name='qemu' type='raw'/> <target dev='hdc' bus='ide'/> <readonly/> <address type='drive' controller='0' bus='1' unit='0'/> </disk>
            <controller type='ide' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller>
            <interface type='bridge'> <mac address='52:54:00:0a:65:8d'/> <source bridge='br0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </interface>
            <serial type='pty'> <target port='0'/> </serial>
            <console type='pty'> <target port='0'/> </console>
            <memballoon model='virtio'> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </memballoon>
          </devices>
        </domain>

    I'm obviously missing something that many others don't, but what is it? Thanks in advance!
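
    One likely culprit, offered as a suggestion rather than a confirmed fix: virt-install only passes --extra-args to the guest kernel when installing from a tree via --location; with --cdrom the kernel arguments are silently ignored, and the Live Desktop ISO has no text-mode installer to redirect anyway. A sketch of a tree-based text install wired to the virtual serial console (the mirror URL is a placeholder):

        virt-install \
            --name=FE --ram=756 --vcpus=1 \
            --disk path=/var/lib/libvirt/images/FE.img \
            --network bridge:br0 --nographics --os-type=linux \
            --location=http://example.mirror/fedora/releases/14/Fedora/x86_64/os/ \
            --extra-args='console=ttyS0,115200n8 text'

    With console=ttyS0 the installer output should then appear directly in virsh console FE.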

    Read the article

  • BIND: DNS records not propagating

    - by realtebo
    I have elfoip.net served by BIND:

        $ whois elfoip.net | grep 'Name Server'
        Name Server: NS.ELFOIP.NET

    I need elfoip.net to be able to serve third-level domains, like mickymouse.elfoip.net, etc... Yes, I'm trying to create another useless dyndns clone. I've added some third-level names as A RRs. E.g., executing this from the server itself:

        $ dig @localhost mattinauno.elfoip.net
        ;; ANSWER SECTION:
        mattinauno.elfoip.net. 60 IN A 192.81.221.113

    I was expecting that within one or two days I could type mattinauno.elfoip.net into the browser on my PC and get the page at 192.81.221.113. But this is not happening. Are there any prerequisites to satisfy so that my ISP's DNS can forward resolution of *.elfoip.net to MY DNS (or query it and then cache)? The TTL of the zone is set to 5m. I have no allow-query directive; is one necessary for other DNS servers to cache from mine? I've checked the zone with the BIND utility named-checkzone, and no errors were detected. How can I diagnose why other DNS servers don't take the RRs from mine into account? From my home PC:

        dig @ns.elfoip.net mattinauno.elfoip.net
        ;; ANSWER SECTION:
        mattinauno.elfoip.net. 60 IN A 192.81.221.113
        ;; AUTHORITY SECTION:
        elfoip.net. 300 IN NS ns.elfoip.net.

    but dig @8.8.8.8 mattinauno.elfoip.net gives no answers. Whole zone file (note: I've used nsupdate, so this file has been re-edited and re-formatted by that utility!):

        root@mirko:/var/named# cat elfoip.net.db
        $ORIGIN .
        $TTL 300        ; 5 minutes
        elfoip.net IN SOA ns.elfoip.net. hostmaster.elfoip.net. (
                        2013062314 ; serial
                        3600       ; refresh (1 hour)
                        600        ; retry (10 minutes)
                        86400      ; expire (1 day)
                        60         ; minimum (1 minute)
                        )
                NS ns.elfoip.net.
                A 109.168.99.6
        $ORIGIN elfoip.net.
        $TTL 60 ; 1 minute
        google A 173.194.35.56
        maiscai A 192.81.221.113
        mattinadue A 192.81.221.113
        mattinauno A 192.81.221.113
        $TTL 300 ; 5 minutes
        ns A 109.168.99.6
        $TTL 60 ; 1 minute
        prova A 208.67.222.222
        prova2 A 13.23.34.45
               A 13.23.34.46
        www CNAME elfoip.net.

    EDIT: added named.conf.local:

        zone "elfoip.net" {
            type master;
            // file "/etc/bind/elfoip.net.db";
            file "/var/named/elfoip.net.db";
            allow-update { key elfoip.net ; };
        };

    EDIT: I have not set up a listen-on directive.

    EDIT: Added a tcpdump capture taken after running dig www.elfoip.net from a machine which uses my company's internal DNS, which allows recursive queries:

        root@mirko:~# tcpdump -i eth0 'port 53'
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
        11:57:23.293611 IP host9-210-static.22-87-b.business.telecomitalia.it.45958 > mirko.elfoip.net.domain: 61337+ A? www.elfoip.net. (32)
        11:57:23.294114 IP mirko.elfoip.net.domain > host9-210-static.22-87-b.business.telecomitalia.it.45958: 61337* 2/1/1 CNAME elfoip.net., A 109.168.99.6 (95)
        11:57:23.294554 IP mirko.elfoip.net.59571 > google-public-dns-a.google.com.domain: 45851+ PTR? 9.210.22.87.in-addr.arpa. (42)
        11:57:23.330444 IP google-public-dns-a.google.com.domain > mirko.elfoip.net.59571: 45851 1/0/0 PTR host9-210-static.22-87-b.business.telecomitalia.it. (106)
        11:57:23.331181 IP mirko.elfoip.net.44171 > google-public-dns-a.google.com.domain: 33339+ PTR? 8.8.8.8.in-addr.arpa. (38)
        11:57:23.439405 IP google-public-dns-a.google.com.domain > mirko.elfoip.net.44171: 33339 1/0/0 PTR google-public-dns-a.google.com. (82)
        11:57:31.350654 IP host9-210-static.22-87-b.business.telecomitalia.it.30108 > mirko.elfoip.net.domain: 38269 [1au] A? ns.elfoip.net. (42)
        11:57:31.351117 IP mirko.elfoip.net.domain > host9-210-static.22-87-b.business.telecomitalia.it.30108: 38269* 1/1/1 A 109.168.99.6 (72)

    If I dig @8.8.8.8 www.elfoip.net, NOTHING appears in the dump log!
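
    A diagnostic worth running (a suggestion, not part of the original post) is to walk the delegation chain and confirm both that the .net registry hands *.elfoip.net queries to ns.elfoip.net and that port 53 on 109.168.99.6 answers from outside, over UDP and TCP:

        dig +trace mattinauno.elfoip.net               # follow delegation from the root down
        dig @a.gtld-servers.net elfoip.net NS          # what the .net servers delegate to
        dig @109.168.99.6 mattinauno.elfoip.net +tcp   # reachability of the server itself

    If +trace stalls at the .net servers, the delegation or glue at the registrar is the problem; if it stalls at ns.elfoip.net, a firewall or a listen-on/allow-query restriction on the BIND host is the more likely cause.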

    Read the article

  • AppLocker custom extensions (Java, CPL, MSC, etc.)

    - by test1839
    We have a Terminal Server and want to prevent users from running inappropriate software. Previously we used Software Restriction Policies for this purpose; now Microsoft seems to recommend AppLocker instead. However, we found no way to add custom extensions like JAR, CPL, MSC, etc., which was possible in Software Restriction Policies. Do you know how to add custom extensions to the AppLocker policies in Windows 2008? Or how can we block custom script interpreters like Perl, etc.?
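
    AppLocker matches files by rule collection (Executable, Windows Installer, Script, DLL) rather than by an editable extension list, so one common workaround - sketched here, not a confirmed fix for this setup, and note that AppLocker requires Windows Server 2008 R2 or later - is to deny the interpreter executables themselves (java.exe, javaw.exe, perl.exe, ...). With the AppLocker PowerShell cmdlets (paths are examples):

        # Generate rules from the interpreter binaries (this creates Allow rules;
        # edit the resulting XML to Action="Deny" before importing)
        Get-ChildItem 'C:\Program Files\Java\jre6\bin\java*.exe' |
            Get-AppLockerFileInformation |
            New-AppLockerPolicy -RuleType Publisher,Hash -User Everyone -Optimize -Xml |
            Out-File .\java-deny.xml
        Set-AppLockerPolicy -XmlPolicy .\java-deny.xml -Merge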

    Read the article

  • SharePoint Server 2007 generates event log entry every 5 minutes - "The SSP Timer Job Distribution List Import Job was not run"

    - by Teevus
    I get the following error logged in the Event Log every 5 minutes: "The SSP Timer Job Distribution List Import Job was not run. Reason: Logon failure: the user has not been granted the requested logon type at this computer." In addition, OWSTimer.exe periodically gets into a state where it's consuming almost all the CPU, and only killing the process or restarting the SharePoint services fixes it (although I'm not sure if this is a related or separate issue). I have tried the following (based on various suggestions floating around the web), all to no avail:

    - iisreset (no effect)
    - Added the SharePoint and SharePoint Search service accounts to the "Log on as a batch job" and "Log on as a service" policies in the Group Policies for the domain. I went into the Local Computer Policy on the SharePoint server and verified that those policies had actually been applied.
    - Verified that the SharePoint and SharePoint Search service accounts are both in the WSS_WPG group.
    - Verified in dcomcnfg that the WSS_WPG group (and indeed the SharePoint and SharePoint Search service accounts) has local activation rights for SPSearch.

    Any more suggestions would be valued. Thanks
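
    One further check that can help here (a suggestion beyond the steps above): dump the effective user-rights assignments on the SharePoint box and confirm the service accounts actually hold SeBatchLogonRight and are not caught by a "Deny log on as a batch job" entry, e.g.:

        secedit /export /cfg C:\temp\rights.inf /areas USER_RIGHTS
        findstr /i "SeBatchLogonRight SeDenyBatchLogonRight SeServiceLogonRight" C:\temp\rights.inf
        gpresult /z > C:\temp\gpresult.txt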

    Read the article

  • NAT Policy Inbound Source Problem on SonicWall TZ-210 with Multiple DSL Lines

    - by HK1
    We recently added three more DSL connections to our SonicWall TZ-210. My NAT Policies work fine as long as I leave them set with an inbound interface of X1, which hosts our original DSL connection. However, I'd like to change some of the NAT Policies to use inbound source/interface X2, X3, X4 or Any. In my initial tests, when I change one of the policies to use an inbound interface of X2, that port forward policy does not work at all. Traffic never makes it to the internal destination. What could be the problem?

    Read the article

  • Is there a way for IE9 on a virtual machine to do AD auth without the VM being joined to the domain when the host machine is?

    - by Micah Armantrout
    I have a virtual machine running IE9 and Windows 7 with the latest updates that I want to use to test my intranet site (an ASP.NET application). I can't add the virtual machine to the domain, and I don't want to have to type my AD credentials every time I load the site. Is there a way for IE on the VirtualBox guest to authenticate with the AD credentials from my host machine so I don't always have to put my username and password in? I guess I can just have IE on the virtual machine remember my username and password, but other than that, is there another way to do this?
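
    One option that may fit (an assumption, not a verified fix): store the domain credentials in the guest's Credential Manager so IE replays them for that host without prompting - the site generally also needs to be in the Local intranet zone for automatic logon. The host name and account below are placeholders:

        cmdkey /add:intranet.example.com /user:MYDOMAIN\myuser /pass:MyP@ssw0rd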

    Read the article

  • Restrict only some plugins to specific sites in Google Chrome

    - by Christian
    I am looking for a way to set up Google Chrome so that it will run a certain plug-in (Java, what else?) only on whitelisted sites, but other plug-ins (like the PDF viewer) everywhere. From playing with the policies available for Chrome, I think there are basically two levels of plug-in management:

    - List of disabled plugins/enabled plugins: controls whether a plug-in exists for the browser at all. This pair of policies applies to plug-ins, but not to sites.
    - Default plug-in settings/Allow plug-ins on sites: controls on which sites plug-ins can run. This set of policies applies to sites, but not to individual plug-ins, and it cannot override the first pair.

    There appears to be no way to configure Chrome so that some plug-ins only run on whitelisted sites, but others run everywhere by default. I have also looked at filtering content on the firewall/proxy level, but I'm not convinced it can be done securely there. Filtering by URLs (file names) or content types can be circumvented trivially, and identification by content inspection cannot be safe either.
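
    To make the two levels concrete, here is a sketch of a Linux managed-policy file (e.g. /etc/opt/chrome/policies/managed/plugins.json - path, site and exact policy availability should be verified against your Chrome version's policy list):

        {
          "DisabledPlugins": ["Java*"],
          "DefaultPluginsSetting": 2,
          "PluginsAllowedForUrls": ["https://intranet.example.com"]
        }

    DisabledPlugins removes Java everywhere; DefaultPluginsSetting/PluginsAllowedForUrls then gate every remaining plug-in by site, with no way to scope the whitelist to a single plug-in - which is exactly the limitation described above.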

    Read the article

  • Unofficial Prep guide for TS: Microsoft Lync Server 2010, Configuring (70-664)

    - by Enrique Lima
    Managing Users and Client Access (20 percent)
    - Configure user accounts: http://technet.microsoft.com/en-us/library/gg182543.aspx
    - Deploy and maintain clients: http://technet.microsoft.com/en-us/library/gg412773.aspx
    - Configure conferencing policies: http://technet.microsoft.com/en-us/library/gg182561.aspx
    - Configure IM policies: http://technet.microsoft.com/en-us/library/gg182558.aspx
    - Deploy and maintain Lync Server 2010 devices: http://technet.microsoft.com/en-us/library/gg412773.aspx
    - Resolve client access issues: http://technet.microsoft.com/en-us/library/gg398307.aspx

    Configuring a Lync Server 2010 Topology (21 percent)
    - Prepare to deploy a topology: http://technet.microsoft.com/en-us/library/gg398630.aspx
    - Configure Lync Server 2010 by using Topology Builder: http://technet.microsoft.com/en-us/library/gg398420.aspx
    - Configure role-based access control in Lync Server 2010: http://technet.microsoft.com/en-us/library/gg412794.aspx, http://technet.microsoft.com/en-us/library/gg425917.aspx
    - Configure a location information server: http://technet.microsoft.com/en-us/library/gg398390.aspx
    - Configure server pools for load balancing: http://technet.microsoft.com/en-us/library/gg398827.aspx

    Configuring Enterprise Voice (19 percent)
    - Configure voice policies: http://technet.microsoft.com/en-us/library/gg398450.aspx
    - Configure dial plans: http://technet.microsoft.com/en-us/library/gg398922.aspx
    - Manage routing: http://technet.microsoft.com/en-us/library/gg425890.aspx, http://technet.microsoft.com/en-us/library/gg182596.aspx
    - Configure Microsoft Exchange Unified Messaging integration: http://technet.microsoft.com/en-us/library/gg398768.aspx
    - Configure dial-in conferencing: http://technet.microsoft.com/en-us/library/gg398600.aspx
    - Configure call admission control: http://technet.microsoft.com/en-us/library/gg520942.aspx
    - Configure Response Group Services (RGS): http://technet.microsoft.com/en-us/library/gg398584.aspx
    - Configure Call Park and Unassigned Number: http://technet.microsoft.com/en-us/library/gg399014.aspx, http://technet.microsoft.com/en-us/library/gg425944.aspx
    - Manage a Mediation Server pool and PSTN Gateway: http://technet.microsoft.com/en-us/library/gg412780.aspx

    Configuring Lync Server 2010 for External Access (19 percent)
    - Configure Edge Services: http://technet.microsoft.com/en-us/library/gg398918.aspx
    - Configure a firewall: http://technet.microsoft.com/en-us/library/gg425882.aspx
    - Configure a reverse proxy: http://technet.microsoft.com/en-us/library/gg425779.aspx

    Monitoring and Maintaining Lync Server 2010 (21 percent)
    - Back up and restore Lync Server 2010: http://technet.microsoft.com/en-us/library/gg412771.aspx
    - Configure monitoring and archiving: http://technet.microsoft.com/en-us/library/gg398199.aspx, http://technet.microsoft.com/en-us/library/gg398507.aspx, http://technet.microsoft.com/en-us/library/gg520950.aspx, http://technet.microsoft.com/en-us/library/gg520990.aspx
    - Implement troubleshooting tools: http://technet.microsoft.com/en-us/library/gg425800.aspx
    - Use PowerShell to test Lync Server 2010: http://technet.microsoft.com/en-us/library/gg398474.aspx

    Read the article

  • How to deal with transport level security policy with OSB

    - by Jian Liang
    Recently, we received a use case for Oracle Service Bus (OSB) 11g PS4 to consume a web service which is secured by an HTTP transport-level security policy. The WSDL of the remote web service looks like the following, where the wsp:Policy block contains the security policy:

        <?xml version='1.0' encoding='UTF-8'?>
        <definitions xmlns:wssutil="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
                     xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
                     xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
                     xmlns:tns="https://httpsbasicauth"
                     xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                     xmlns="http://schemas.xmlsoap.org/wsdl/"
                     targetNamespace="https://httpsbasicauth"
                     name="HttpsBasicAuthService">
          <wsp:UsingPolicy wssutil:Required="true"/>
          <wsp:Policy wssutil:Id="WSHttpBinding_IPartyServicePortType_policy">
            <wsp:ExactlyOne>
              <wsp:All>
                <ns1:TransportBinding xmlns:ns1="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy">
                  <wsp:Policy>
                    <ns1:TransportToken>
                      <wsp:Policy>
                        <ns1:HttpsToken RequireClientCertificate="false"/>
                      </wsp:Policy>
                    </ns1:TransportToken>
                    <ns1:AlgorithmSuite>
                      <wsp:Policy>
                        <ns1:Basic256/>
                      </wsp:Policy>
                    </ns1:AlgorithmSuite>
                    <ns1:Layout>
                      <wsp:Policy>
                        <ns1:Strict/>
                      </wsp:Policy>
                    </ns1:Layout>
                  </wsp:Policy>
                </ns1:TransportBinding>
                <ns2:UsingAddressing xmlns:ns2="http://www.w3.org/2006/05/addressing/wsdl"/>
              </wsp:All>
            </wsp:ExactlyOne>
          </wsp:Policy>
          <types>
            <xsd:schema>
              <xsd:import namespace="https://proxyhttpsbasicauth" schemaLocation="http://localhost:7001/WS/HttpsBasicAuthService?xsd=1"/>
            </xsd:schema>
            <xsd:schema>
              <xsd:import namespace="https://httpsbasicauth" schemaLocation="http://localhost:7001/WS/HttpsBasicAuthService?xsd=2"/>
            </xsd:schema>
          </types>
          <message name="echoString">
            <part name="parameters" element="tns:echoString"/>
          </message>
          <message name="echoStringResponse">
            <part name="parameters" element="tns:echoStringResponse"/>
          </message>
          <portType name="HttpsBasicAuth">
            <operation name="echoString">
              <input message="tns:echoString"/>
              <output message="tns:echoStringResponse"/>
            </operation>
          </portType>
          <binding name="HttpsBasicAuthSoapPortBinding" type="tns:HttpsBasicAuth">
            <wsp:PolicyReference URI="#WSHttpBinding_IPartyServicePortType_policy"/>
            <soap:binding transport="http://schemas.xmlsoap.org/soap/http" style="document"/>
            <operation name="echoString">
              <soap:operation soapAction=""/>
              <input><soap:body use="literal"/></input>
              <output><soap:body use="literal"/></output>
            </operation>
          </binding>
          <service name="HttpsBasicAuthService">
            <port name="HttpsBasicAuthSoapPort" binding="tns:HttpsBasicAuthSoapPortBinding">
              <soap:address location="https://localhost:7002/WS/HttpsBasicAuthService"/>
            </port>
          </service>
        </definitions>

    The security assertion in the WSDL (the wsp:Policy block above) indicates that this is the HTTP transport-level security policy, which requires one-way SSL with default authentication (aka basic authentication with username/password). Normally, there are two ways to handle a web service security policy with OSB 11g: use a WebLogic 9.x policy, or use OWSM. Since OSB doesn't support WebLogic 9.x WSSP transport-level assertions (except for WS transport), when we tried to create the business service based on the imported WSDL, OSB complained with the following message:

        [OSB Kernel:398133]The service is based on WSDL with Web Services Security Policies
        that are not natively supported by Oracle Service Bus. Please select OWSM Policies -
        From OWSM Policy Store option and attach equivalent OWSM security policy.
    For the business service, you can either add the necessary client policies manually by clicking the Add button, or let Oracle Service Bus automatically pick and add compatible client policies by clicking the Add Compatible button. Unfortunately, when we tried OWSM, we couldn't find http_token_policy, since OSB PS4 doesn't support the OWSM http_token_policy. It seems that we ran into an unsupported situation where no appropriate policy can be used from either WebLogic or OWSM. As this security policy requires one-way SSL with basic authentication at the transport level, a possible workaround is to meet the remote service's requirement at the transport level without using a web service policy. We can simply use OSB to establish the SSL connection and provide a username/password for authentication at the transport level to the remote web service. In this case, the business service within OSB will be transparent to the web service policy. However, we still need to deal with the OSB console's complaint about the unsupported security policy, because the failure of WSDL validation prevents the OSB console from moving forward. With help from the OSB Product Management team, we finally came up with the following solutions:

    Solution 1: OSB PS5

    The good news is that http_token_policy is made available in OSB PS5. With OSB PS5, you can simply add the OWSM oracle/wss_http_token_over_ssl_client_policy to the business service. The simplest solution is to upgrade to OSB PS5, where the OWSM solution is provided out of the box. But if you are not in a position where upgrading is an immediate option, you might want to consider the two workaround solutions described below.

    Solution 2: Modifying the WSDL

    This solution addresses the OSB console's complaint by removing the security policy from the imported WSDL within OSB. Without the security policy, the OSB console allows the business service to be created based on the modified WSDL. Please bear in mind that modifying the WSDL is done only on the OSB side via the OSB console; no change is required on the remote web service. The main steps of this solution:

    1. Connect to the OSB console.
    2. Import the remote WSDL into OSB.
    3. Remove the security assertion (the wsp:Policy block shown above) from the imported WSDL.
    4. Create a service account. In our sample, we simply take the user weblogic.
    5. Create the business service, check "Basic" for Authentication, and select the created service account.
    6. Make sure that OSB consumes the web service via https.

    This solution requires modifying the WSDL. It is suitable for any OSB version (10g or 11g) prior to PS5 without OWSM. However, modifying the WSDL by hand is troublesome, as it requires the user to remember that the original WSDL was edited. It forces you to make the same edit each time you want to re-import the service WSDL when changes occur at the service level. This also prevents you from using UDDI to import the WSDL.

    Solution 3: Using the original WSDL

    This solution keeps the WSDL intact and ignores the embedded policy by using OWSM. By design, OWSM doesn't like a WSDL with an embedded security assertion. Since OWSM doesn't provide a feature to explicitly ignore the embedded policy from a remote WSDL, in this solution we use OWSM in a tricky way to ignore it. The main steps:
    1. Connect to the OSB console.
    2. Import the remote WSDL into OSB.
    3. Create a service account.
    4. Create the business service, check "Basic" for Authentication, and select the created service account. As the imported WSDL is intact, the OSB Kernel:398133 error is expected; ignore it for the moment.
    5. Navigate to the Policies page of the business service.
    6. Select "From OWSM Policy Store" and click the "Add" button; the list of policies will pop up.
    7. Here is the tricky part: select an arbitrary policy, and click "Cancel".
    8. Update and save.

    By clicking the "Cancel" button, we didn't add any OWSM policy to the business service, but the embedded policy is ignored. Yes, this is tricky. According to the Oracle OSB Product Manager, a future release of OWSM will add a "None" button which allows the embedded policy to be ignored explicitly. This solution keeps the imported WSDL intact, which is the big advantage over solution 2. It is suitable for an OSB 11g domain (versions prior to PS5) with OWSM configured. This blog addressed the unsupported transport-level web service security policy with OSB PS4. To summarize: if you are using OSB PS5 or are in a position to upgrade to PS5, the recommendation is to use the OWSM out-of-the-box transport-level security policy directly. With releases prior to 11g PS5, you can consider solution 2 or 3, depending on whether OWSM is configured.
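
    Because the requirement is purely transport-level (one-way SSL plus basic authentication), the endpoint can also be sanity-checked outside OSB with any HTTPS client before the business service is wired up. A sketch with curl (certificate path, credentials and request file are placeholders):

        curl --cacert /path/to/server-ca.pem \
             -u weblogic:welcome1 \
             -H 'Content-Type: text/xml; charset=utf-8' \
             -H 'SOAPAction: ""' \
             --data @echoString-request.xml \
             https://localhost:7002/WS/HttpsBasicAuthService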

    Read the article

  • Permission based Authorization vs. Role based Authorization - Best Practices - 11g

    - by Prakash Yamuna
    In previous blog posts here and here I have alluded to the support in OWSM for permission-based authorization and role-based authorization. Recently I was having a conversation with an internal team in Oracle looking to use OWSM for their web services security needs, and one of the topics was: when to use permission-based authorization vs. role-based authorization? As in most scenarios, the answer is: it depends! There are trade-offs involved in the two approaches, and you need to understand which trade-offs are better for your scenario.

    Role-based authorization:
    - Simple to use. Just create a new custom OWSM policy and specify the role in the policy (using EM Fusion Middleware Control).
    - Inconsistent if you have multiple types of resources in an application (e.g. EJBs, web apps, web services) - the model for securing EJBs with roles and the model for securing web app roles are inconsistent. Since the model is inconsistent, tooling is also fairly inconsistent.
    - Achieving this use case using JDeveloper is slightly complex, since JDeveloper does not directly support creating OWSM custom policies.

    Permission-based authorization:
    - More complex. You need to both attach an OWSM policy and create OPSS permission authorization policies. (Note: OWSM leverages OPSS permission-based authorization support.)
    - More appropriate if you have multiple types of resources in an application (e.g. EJBs, web apps, web services) and want a consistent authorization model.
    - Consistent tooling for managing authorization across different resources (e.g. EM Fusion Middleware Control).
    - Better lifecycle support in terms of T2P, etc.
    - Achieving this use case using JDeveloper is slightly complex, since JDeveloper does not directly support creating/editing OPSS permission-based authorization policies.
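
    To illustrate the extra step that the permission-based approach requires, here is a sketch of the OPSS grant made with WLST (the group name and resource target are illustrative assumptions; OWSM checks oracle.wsm.security.WSFunctionPermission for web service authorization):

        connect('weblogic', 'welcome1', 't3://localhost:7001')
        grantPermission(principalClass='weblogic.security.principal.WLSGroupImpl',
                        principalName='Managers',
                        permClass='oracle.wsm.security.WSFunctionPermission',
                        permTarget='http://www.example.com/MyService#myOperation',
                        permActions='invoke')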

    Read the article

  • Disable Password Complexity/Expiration etc. Policy on Windows Server 2008

    - by Sahil Malik
    One of the things I like to do, for development environments only, is to get rid of those excessively bothersome password policies. I like to have my password be something like p@ssword1, so it is easy to remember, etc. Obviously, never do this in production. However, Windows Server 2008 comes with a password policy that expires my passwords every 90 days, requires me to pick complex passwords, won't let me reuse passwords, etc. Here is how you disable the password policy on a Windows Server 2008 machine:

    1. Run Group Policy Management (gpmc.msc).
    2. Expand to your domain and look for Forest\Domains\yourdomain\Default Domain Policy.
    3. Go to the Settings tab, right-click on the tab, and choose "Edit". This will open the Group Policy Management Editor, in which -
    4. Go to Computer Configuration\Policies\Windows Settings\Security Settings\Account Policies\Password Policy, and change the policy to whatever suits you.
    5. Close everything, run a command prompt as administrator, and issue a "gpupdate /force" command to force the group policy update on the machine.
    6. Restart, and you're done! :)
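
    For a scripted variant of part of this (again, development machines only), the age and length rules can also be relaxed from an elevated prompt; note that net accounts cannot turn off the complexity requirement itself, which still needs the Group Policy edit above:

        net accounts /maxpwage:unlimited /minpwlen:4
        gpupdate /force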

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-27

    - by Bob Rhubart
    Understanding Oracle BI 11g Security vs Legacy Oracle BI 10g | Christian Screen "After conducting a large amount of Oracle BI 10g to Oracle BI 11g upgrades and after writing the Oracle BI 11g book," says Oracle ACE Christian Screen, "I still continually get asked one of the most basic questions regarding security in Oracle BI 11g; How does it compare to Oracle BI 10g? The trail of questions typically goes on to what are the differences? And, how do we leverage our current Oracle BI 10g security table schema in Oracle BI 11g?" Process Oracle OER Events using a simple Web Service | Bob Webster Bob Webster's post "provides an example of a simple web service that processes Oracle Enterprise Repository (OER) Events. The service receives events from OER and utilizes the OER REX API to implement simple OER automations for selected event types." Oracle Fusion Middleware Security: Attaching OWSM policies to JRF-based web services clients | Andre Correa "OWSM (Oracle Web Services Manager) is Oracle's recommended method for securing SOAP web services," says Oracle Fusion Middleware A-Team member Andre Correa. "It provides agents that encapsulate the necessary logic to interact with the underlying software stack on both service and client sides. Such agents have their behavior driven by policies. OWSM ships with a bunch of policies that are adequate to most common real world scenarios." His detailed post shows how to make it happen. WebCenter Content (WCC) Trace Sections | ECM Architect ECM Architect Kevin Smith shares a detailed technical post covering WebCenter Content (WCC) Trace sections. Thought for the Day "A complex system that works is invariably found to have evolved from a simple system that worked." — John Gall Source: SoftwareQuotes.com

    Read the article

  • Data Management Business Continuity Planning

    Business Continuity Governance

    In order to ensure data continuity, an organization needs to know how to handle a data or network emergency, because all systems have the potential to fail.

    Data Continuity Checklist:
    - Disaster Recovery Plan/Policy
    - Backups
    - Redundancy
    - Trained Staff

    Business Continuity Policies

    In order to protect data in any emergency, a company needs to put in place a disaster recovery plan and policies that can be executed by IT staff to ensure the continuity of existing data and/or limit the amount of data that is lost. A disaster recovery plan is a comprehensive statement of consistent actions to be taken before, during and after a disaster, according to Geoffrey H. Wold. He also states that the primary objective of disaster recovery planning is to protect the organization in the event that all or parts of its operations and/or computer services are rendered unusable. Furthermore, companies can mandate through policies that IT must maintain redundant hardware in case of any hardware failures, and redundant network connectivity in case the primary internet service provider goes down. Additionally, they can require that all staff be trained on the disaster recovery policy to ensure that all parties involved are knowledgeable enough to execute the recovery plan.

    Business Continuity Procedures

    Business continuity procedures vary from organization to organization; however, there are standard procedures that most organizations should follow:
    - Back up data, and test the backups to ensure that they work
    - Hire knowledgeable and trainable staff
    - Offer training on new and existing systems
    - Regularly monitor, test, maintain, and upgrade existing system hardware and applications
    - Maintain redundancy for all data and critical business functionality

    Read the article

  • Introducing Oracle VM Server for SPARC

    - by Honglin Su
    As you are watching Oracle's Virtualization Strategy Webcast and exploring the great virtualization offerings of the Oracle VM product line, I'd like to introduce Oracle VM Server for SPARC - a highly efficient, enterprise-class virtualization solution for Sun SPARC Enterprise systems with Chip Multithreading (CMT) technology. Oracle VM Server for SPARC, previously called Sun Logical Domains, leverages the built-in SPARC hypervisor to subdivide a supported platform's resources (CPUs, memory, network, and storage) by creating partitions called logical (or virtual) domains. Each logical domain can run an independent operating system. Oracle VM Server for SPARC provides the flexibility to deploy multiple Oracle Solaris operating systems simultaneously on a single platform. Oracle VM Server also allows you to create up to 128 virtual servers on one system to take advantage of the massive thread scale offered by the CMT architecture. Oracle VM Server for SPARC integrates both the industry-leading CMT capability of the UltraSPARC T1, T2 and T2 Plus processors and the Oracle Solaris operating system. This combination helps to increase flexibility, isolate workload processing, and improve the potential for maximum server utilization. Oracle VM Server for SPARC delivers the following:

    - Leading Price/Performance - The low-overhead architecture provides scalable performance under increasing workloads without additional license cost. This enables you to meet the most aggressive price/performance requirements.
    - Advanced RAS - Each logical domain is an entirely independent virtual machine with its own OS. It supports virtual disk multipathing and failover as well as faster network failover with link-based IP multipathing (IPMP) support. Moreover, it's fully integrated with Solaris FMA (Fault Management Architecture), which enables predictive self-healing.
    - CPU Dynamic Resource Management (DRM) - Enable your resource management policy and domain workload to trigger the automatic addition and removal of CPUs. This ability helps you to better align with your IT and business priorities.
    - Enhanced Domain Migrations - Perform domain migrations interactively and non-interactively to bring more flexibility to the management of your virtualized environment. Improve active domain migration performance by compressing memory transfers and taking advantage of cryptographic acceleration hardware. These methods provide faster migration for load balancing, power saving, and planned maintenance.
    - Dynamic Crypto Control - Dynamically add and remove cryptographic units (aka MAU) to and from active domains. Also, migrate active domains that have cryptographic units.
    - Physical-to-Virtual (P2V) Conversion - Quickly convert an existing SPARC server running the Oracle Solaris 8, 9 or 10 OS into a virtualized Oracle Solaris 10 image. Use this image to facilitate OS migration into the virtualized environment.
    - Virtual I/O Dynamic Reconfiguration (DR) - Add and remove virtual I/O services and devices without needing to reboot the system.
    - CPU Power Management - Implement power saving by disabling each core on a Sun UltraSPARC T2 or T2 Plus processor that has all of its CPU threads idle.
    - Advanced Network Configuration - Configure the following network features to obtain more flexible network configurations, higher performance, and scalability: jumbo frames, VLANs, virtual switches for link aggregations, and network interface unit (NIU) hybrid I/O.
    - Official Certification Based On Real-World Testing - Use Oracle VM Server for SPARC with the most sophisticated enterprise workloads under real-world conditions, including Oracle Real Application Clusters (RAC).
    - Affordable, Full-Stack Enterprise-Class Support - Obtain worldwide support from Oracle for the entire virtualization environment and workloads together. The support covers hardware, firmware, OS, virtualization, and the software stack.

    SPARC Server Virtualization

    Oracle offers a full portfolio of virtualization solutions to address your needs. SPARC is the leading platform to have the hard partitioning capability that provides the physical isolation needed to run independent operating systems. Many customers have already used Oracle Solaris Containers for application isolation. Oracle VM Server for SPARC provides another important feature with OS isolation. This gives you the flexibility to deploy multiple operating systems simultaneously on a single Sun SPARC T-Series server with finer granularity for computing resources. For SPARC CMT processors, the natural level of granularity is an execution thread, not a time-sliced microsecond of execution resources. Each CPU thread can be treated as an independent virtual processor. The scheduler is naturally built into the CPU for lower overhead and higher performance. Your organization can couple Oracle Solaris Containers and Oracle VM Server for SPARC with the breakthrough space and energy savings afforded by Sun SPARC Enterprise systems with CMT technology to deliver a more agile, responsive, and low-cost environment.

    Management with Oracle Enterprise Manager Ops Center

    The Oracle Enterprise Manager Ops Center Virtualization Management Pack provides full lifecycle management of virtual guests, including Oracle VM Server for SPARC and Oracle Solaris Containers. It helps you streamline operations and reduce downtime. Together, the Virtualization Management Pack and the Ops Center Provisioning and Patch Automation Pack provide an end-to-end management solution for physical and virtual systems through a single web-based console. This solution automates the lifecycle management of physical and virtual systems and is the most effective systems management solution for Oracle's Sun infrastructure.

    Ease of Deployment with Configuration Assistant

    The Oracle VM Server for SPARC Configuration Assistant can help you easily create logical domains. After gathering the configuration data, the Configuration Assistant determines the best way to create a deployment to suit your requirements. The Configuration Assistant is available as both a graphical user interface (GUI) and a terminal-based tool.

    Oracle Solaris Cluster HA Support

    The Oracle Solaris Cluster HA for Oracle VM Server for SPARC data service provides a mechanism for orderly startup and shutdown, fault monitoring and automatic failover of the Oracle VM Server guest domain service. In addition, applications that run on a logical domain, as well as its resources and dependencies, can be controlled and managed independently. These are managed as if they were running in a classical Solaris Cluster hardware node.

    Supported Systems

    Oracle VM Server for SPARC is supported on all Sun SPARC Enterprise systems with CMT technology.

    UltraSPARC T2 Plus systems:
    - Sun SPARC Enterprise T5140 Server
    - Sun SPARC Enterprise T5240 Server
    - Sun SPARC Enterprise T5440 Server
    - Sun Netra T5440 Server
    - Sun Blade T6340 Server Module
    - Sun Netra T6340 Server Module

    UltraSPARC T2 systems:
    - Sun SPARC Enterprise T5120 Server
    - Sun SPARC Enterprise T5220 Server
    - Sun Netra T5220 Server
    - Sun Blade T6320 Server Module
    - Sun Netra CP3260 ATCA Blade Server

    Note that UltraSPARC T1 systems are supported on earlier versions of the software. Sun SPARC Enterprise systems with CMT technology come with the right to use (RTU) of Oracle VM Server, and the software is pre-installed. If you have the systems under warranty or with support, you can download the software and system firmware as well as their updates. Oracle Premier Support for Systems provides fully integrated support for your server hardware, firmware, OS, and virtualization software. Visit oracle.com/support for information about Oracle's support offerings for Sun systems. For more information about Oracle's virtualization offerings, visit oracle.com/virtualization.
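
    For a flavor of what creating a logical domain looks like in practice, here is a minimal sketch with the Logical Domains Manager CLI (resource sizes are arbitrary, and it assumes the control domain already exports a virtual switch primary-vsw0 and a disk volume vol0 on the service primary-vds0):

        ldm add-domain ldg1
        ldm add-vcpu 8 ldg1
        ldm add-memory 4G ldg1
        ldm add-vnet vnet1 primary-vsw0 ldg1
        ldm add-vdisk vdisk0 vol0@primary-vds0 ldg1
        ldm bind-domain ldg1
        ldm start-domain ldg1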

    Read the article

  • Why won't fetchmail work all of a sudden?

    - by SirCharlo
    I ran a chmod 777 * on my home folder. (I know, I know. I'll never do it again.) Ever since then, fetchmail seems to be broken. I use it to fetch mail from an Exchange 2003 mailbox through DAVMail and OWA. The problem is that fetchmail complains about an "expunge mismatch" whenever I get a new message. It deletes the message from the Exchange mailbox, yet it never forwards it. There seems to be a problem somewhere along the mail processing, but I haven't been able to pinpoint where. Any help would be appreciated. Here are the relevant config files.

    ~/.fetchmailrc:

        set no bouncemail
        defaults:
            antispam -1
            batchlimit 100
        poll localhost with protocol imap and port 1143
            user domain\\user password Password is root no rewrite
            mda "/usr/bin/procmail -f %F -d %T";

    ~/.procmailrc:

        :0
        * ^Subject.*ack
        | expand | sed -e 's/[ ]*$//g' | sed -e 's/^/ /' > /usr/local/nagios/libexec/mail_acknowledgement

    ~/.forward:

        | "/usr/bin/procmail"

    And here is the output when I run fetchmail -f /root/.fetchmailrc -vv:

    fetchmail: WARNING: Running as root is discouraged. Old UID list from localhost: <empty> Scratch list of UIDs: <empty> fetchmail: 6.3.19 querying localhost (protocol IMAP) at Tue 03 Jul 2012 09:46:36 AM EDT: poll started Trying to connect to 127.0.0.1/1143...connected. fetchmail: IMAP< * OK [CAPABILITY IMAP4REV1 AUTH=LOGIN] IMAP4rev1 DavMail 3.9.7-1870 server ready fetchmail: IMAP> A0001 CAPABILITY fetchmail: IMAP< * CAPABILITY IMAP4REV1 AUTH=LOGIN fetchmail: IMAP< A0001 OK CAPABILITY completed fetchmail: Protocol identified as IMAP4 rev 1 fetchmail: GSSAPI error gss_inquire_cred: Unspecified GSS failure. Minor code may provide more information fetchmail: GSSAPI error gss_inquire_cred: fetchmail: No suitable GSSAPI credentials found. Skipping GSSAPI authentication. fetchmail: If you want to use GSSAPI, you need credentials first, possibly from kinit. fetchmail: IMAP> A0002 LOGIN "domain\\user" * fetchmail: IMAP< A0002 OK Authenticated fetchmail: selecting or re-polling default folder fetchmail: IMAP> A0003 SELECT "INBOX" fetchmail: IMAP< * 1 EXISTS fetchmail: IMAP< * 1 RECENT fetchmail: IMAP< * OK [UIDVALIDITY 1] fetchmail: IMAP< * OK [UIDNEXT 344] fetchmail: IMAP< * FLAGS (\Answered \Deleted \Draft \Flagged \Seen $Forwarded Junk) fetchmail: IMAP< * OK [PERMANENTFLAGS (\Answered \Deleted \Draft \Flagged \Seen $Forwarded Junk)] fetchmail: IMAP< A0003 OK [READ-WRITE] SELECT completed fetchmail: 1 message waiting after first poll fetchmail: IMAP> A0004 EXPUNGE fetchmail: IMAP< A0004 OK EXPUNGE completed fetchmail: 1 message waiting after expunge fetchmail: IMAP> A0005 SEARCH UNSEEN fetchmail: IMAP< * SEARCH 1 fetchmail: 1 is unseen fetchmail: IMAP< A0005 OK SEARCH completed fetchmail: 1 is first unseen 1 message for domain\user at localhost. fetchmail: IMAP> A0006 FETCH 1 RFC822.SIZE fetchmail: IMAP< * 1 FETCH (UID 343 RFC822.SIZE 1350) fetchmail: IMAP< A0006 OK FETCH completed fetchmail: IMAP> A0007 FETCH 1 RFC822.HEADER fetchmail: IMAP< * 1 FETCH (UID 343 RFC822.HEADER {1350} reading message domain\user@localhost:1 of 1 (1350 header octets) fetchmail: about to deliver with: /usr/bin/procmail -f '[email protected]' -d 'root' # fetchmail: IMAP< fetchmail: IMAP< fetchmail: IMAP< Bonne journ=E9e..
fetchmail: IMAP< fetchmail: IMAP< Company Name fetchmail: IMAP< My Name fetchmail: IMAP< IT fetchmail: IMAP< Tel: (XXX) XXX-XXXX xXXX fetchmail: IMAP< www.domain.com=20 fetchmail: IMAP< fetchmail: IMAP< fetchmail: IMAP< -----Message d'origine----- fetchmail: IMAP< De=A0: User [mailto:[email protected]]=20 fetchmail: IMAP< Envoy=E9=A0: 2 juillet 2012 15:50 fetchmail: IMAP< =C0=A0: Informatique fetchmail: IMAP< Objet=A0: PROBLEM: photo fetchmail: IMAP< fetchmail: IMAP< Notification Type: PROBLEM fetchmail: IMAP< Author:=20 fetchmail: IMAP< Comment:=20 fetchmail: IMAP< fetchmail: IMAP< Host: Photos fetchmail: IMAP< Hostname: photo fetchmail: IMAP< State: DOWN fetchmail: IMAP< Address: XXX.XX.X.XX fetchmail: IMAP< fetchmail: IMAP< Date/Time: Mon Jul 2 15:49:38 EDT 2012 fetchmail: IMAP< fetchmail: IMAP< Info: CRITICAL - XXX.XX.X.XX: rta nan, lost 100% fetchmail: IMAP< fetchmail: IMAP< fetchmail: IMAP< ) fetchmail: IMAP< A0007 OK FETCH completed fetchmail: IMAP> A0008 FETCH 1 BODY.PEEK[TEXT] fetchmail: IMAP< * 1 FETCH (UID 343 BODY[TEXT] {539} (539 body octets) ******************************* fetchmail: IMAP< ) fetchmail: IMAP< A0008 OK FETCH completed flushed fetchmail: IMAP> A0009 STORE 1 +FLAGS (\Seen \Deleted) fetchmail: IMAP< * 1 FETCH (UID 343 FLAGS (\Seen \Deleted)) fetchmail: IMAP< * 1 EXPUNGE fetchmail: IMAP< A0009 OK STORE completed fetchmail: IMAP> A0010 EXPUNGE fetchmail: IMAP< A0010 OK EXPUNGE completed fetchmail: mail expunge mismatch (0 actual != 1 expected) fetchmail: IMAP> A0011 LOGOUT fetchmail: IMAP< * BYE Closing connection fetchmail: IMAP< A0011 OK LOGOUT completed fetchmail: client/server synchronization error while fetching from domain\user@localhost fetchmail: 6.3.19 querying localhost (protocol IMAP) at Tue 03 Jul 2012 09:46:36 AM EDT: poll completed Merged UID list from localhost: <empty> fetchmail: Query status=7 (ERROR) fetchmail: normal termination, status 7
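
    Given that the trouble started immediately after the chmod 777, resetting the permissions is a plausible first step (a sketch, not a confirmed fix): fetchmail refuses to run with a world-readable ~/.fetchmailrc, and both procmail and the .forward mechanism ignore world-writable files:

        chmod 755 ~
        chmod 600 ~/.fetchmailrc
        chmod 644 ~/.procmailrc ~/.forward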

    Read the article

  • JSP Precompilation for ADF Applications

    - by Duncan Mills
    A question that comes up from time to time, particularly in relation to build automation, is how best to pre-compile the .jspx and .jsff files in an ADF application, thus ensuring that the app is ready to run as soon as it's installed into WebLogic. In the normal run of things, the first poor soul to hit a page pays the price and has to wait a little whilst the JSP is compiled into a servlet. Everyone else subsequently gets a free lunch. So it's a reasonable thing to want to do...

    Let Me List the Ways

    So forth to Google (other search engines are available)... which led me to a fairly old article on WLDJ - Removing Performance Bottlenecks Through JSP Precompilation. Technology-wise, it's somewhat out of date, but the one good point that it made is that it's really not very useful to try and use the precompile option in the weblogic.xml file. That's a really good observation - particularly if you're trying to integrate a pre-compile step into a Hudson Continuous Integration process. That same article mentioned an alternative approach for programmatic pre-compilation using weblogic.jspc. This seemed like a much more useful approach for a CI environment. However, weblogic.jspc is now obsoleted by weblogic.appc, so we'll use that instead. Thanks to Steve for the pointer there.

    And So To APPC

    APPC has documentation - always a great place to start - and supports usage both from Ant via the wlappc task and from the command line using the weblogic.appc command. In my testing I took the latter approach. Usage, as the documentation will show you, is superficially pretty simple. The nice thing here is that you can pass an existing EAR file (generated of course using OJDeploy) and that EAR will be updated in place with the freshly compiled servlet classes created from the JSPs. Appc takes care of all the unpacking, compiling and re-packing of the EAR for you. Neat. So we're done, right...? Not quite.

    The Devil is in the Detail

    OK, so I'm being overly dramatic, but it's not all plain sailing, so here's a short guide to using weblogic.appc to compile a simple ADF application without pain.

    Information You'll Need

    The following is based on the assumption that you have a stand-alone WLS install with the Application Development Runtime installed and a suitable ADF-enabled domain created. This could of course all be run off of a JDeveloper install as well.

    1. Your WebLogic home directory. Everything you need is relative to this, so make a note. In my case it's c:\builds\wls_ps4.

    2. Next deploy your EAR as normal and have a peek inside it using your favourite zip management tool. First of all look at the weblogic-application.xml inside the EAR /META-INF directory. Look for any library references. Something like this:

        <library-ref>
            <library-name>adf.oracle.domain</library-name>
        </library-ref>

    Make a note of the library ref (adf.oracle.domain in this case); you'll need that in a second.

    3. Next open the nested WAR file within the EAR and then have a peek inside the weblogic.xml file in the /WEB-INF directory. Again make a note of the library references.

    4. Now start WebLogic as per normal and run the WebLogic console app (e.g. http://localhost:7001/console). In the Domain Structure navigator, select Deployments.

    5. For each of the libraries you noted down, drill into the library definition and make a note of the .war, .ear or .jar that defines the library. For example, in my case adf.oracle.domain maps to "C:\ builds\ WLS_PS4\ oracle_common\ modules\ oracle. adf. model_11. 1. 1\ adf. oracle. domain. ear". Note the extra spaces that are salted throughout this string as it is displayed in the console - just to make it annoying, you'll have to strip these out.

    6. Finally you'll need the location of the adfsharebean.jar. We need to pass this on the classpath for APPC so that the ADFConfigLifeCycleCallBack listener can be found. In a more complex app of your own you may need additional classpath entries as well.

    Now we're ready to go, and it's a simple matter of applying the information we have gathered into the relevant command line arguments for the utility.

    A Simple CMD File to Run APPC

    Here's the stub .cmd file I'm using on Windows to run this:

        @echo off
        REM Stub weblogic.appc Runner
        setlocal
        set WLS_HOME=C:\builds\WLS_PS4
        set ADF_LIB_ROOT=%WLS_HOME%\oracle_common\modules
        set COMMON_LIB_ROOT=%WLS_HOME%\wlserver_10.3\common\deployable-libraries
        set ADF_WEBAPP=%ADF_LIB_ROOT%\oracle.adf.view_11.1.1\adf.oracle.domain.webapp.war
        set ADF_DOMAIN=%ADF_LIB_ROOT%\oracle.adf.model_11.1.1\adf.oracle.domain.ear
        set JSTL=%COMMON_LIB_ROOT%\jstl-1.2.war
        set JSF=%COMMON_LIB_ROOT%\jsf-1.2.war
        set ADF_SHARE=%ADF_LIB_ROOT%\oracle.adf.share_11.1.1\adfsharembean.jar
        REM Set up the WebLogic Environment so appc can be found
        call %WLS_HOME%\wlserver_10.3\server\bin\setWLSEnv.cmd
        CLS
        REM Now compile away!
        java weblogic.appc -verbose -library %ADF_WEBAPP%,%ADF_DOMAIN%,%JSTL%,%JSF% -classpath %ADF_SHARE% %1
        endlocal

    Running the above on a target ADF .ear file will zip through and create all of the relevant compiled classes inside your nested .war file in the \WEB-INF\classes\jsp_servlet\ directory (but don't take my word for it, run it and take a look!)

    And So...

    In the immortal words of the Pet Shop Boys, Was It Worth It? Well, here's where you'll have to do your own testing. In my case here, with a simple ADF application, pre-compilation shaved a non-scientific "3 elephants" off of the initial page load time for the first access of each page. That's a pretty significant payback for such a simple step to add into your CI process, so why not give it a go.
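
    If you would rather drive the same pre-compilation from an Ant build (say, inside Hudson) than from a .cmd stub, wlappc is the Ant face of weblogic.appc. A sketch along these lines - the attribute names mirror the appc flags, so verify them against the wlappc documentation for your WLS version:

        <taskdef name="wlappc" classname="weblogic.ant.taskdefs.j2ee.Appc"
                 classpath="${wls.home}/wlserver_10.3/server/lib/weblogic.jar"/>
        <wlappc source="${dist.dir}/MyApp.ear" verbose="true"
                classpath="${adf.share}"
                library="${adf.webapp},${adf.domain},${jstl},${jsf}"/>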

    Read the article

  • mod_pagespeed is rewriting but not combining

    - by Marc vd M
    I have the following problem. I installed mod_pagespeed, but I am not getting the results I want. It does rewrite my CSS and changes the <link> tags to the cache URLs, but it's not combining the CSS files. I have searched the web and Stack Overflow for it but did not find a solution. Here are the tags:

        <link media="all" type="text/css" href="http://domain.com/assets/css/bootstrap.min.css.pagespeed.ce.Iz3TwZXylG.css" rel="stylesheet">
        <link media="all" type="text/css" href="http://domain.com/assets/css/W.jquery-ui-1.8.24.custom.css.pagespeed.cf.9yjmvb9yjz.css" rel="stylesheet">
        <link media="all" type="text/css" href="http://domain.com/assets/css/W.bootstrap.extend.css.pagespeed.cf.VelsS-YQRX.css" rel="stylesheet">
        <link media="all" type="text/css" href="http://domain.com/assets/css/W.base.css.pagespeed.cf.QO1yNADs1p.css" rel="stylesheet">
        <link media="all" type="text/css" href="http://domain.com/assets/css/W.style.css.pagespeed.cf.tRzIhRbblc.css" rel="stylesheet">
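
    One thing to rule out (a suggestion, not a confirmed diagnosis): make sure the combining filter is actually enabled, since rewriting alone only proves the core rewriters are on. A sketch of the Apache-side directives:

        ModPagespeed on
        ModPagespeedRewriteLevel CoreFilters
        ModPagespeedEnableFilters combine_css

    combine_css is normally part of CoreFilters, so if it is on and still not combining, look for documented combining barriers between the <link> tags: intervening <style> or <script> elements, differing media attributes, cross-domain stylesheets, or a low ModPagespeedMaxCombinedCssBytes limit.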

    Read the article

  • Searching for the last logon of users in Active Directory

    - by Robert May
    I needed to clean out a bunch of old accounts at Veracity Solutions, and wanted to delete those that hadn't used their account in more than a year. I found that AD has a property on objects called lastLogonTimestamp. However, this value isn't exposed to you in any useful fashion. Sure, you can pull up ADSI Edit and eventually get to it there, but it's painful. I spent some time searching and discovered that there's not much out there to help, so I thought a blog post showing exactly how to get at this information would be in order. Basically, what you end up doing is using System.DirectoryServices to search for accounts and then filtering those for users, doing some conversion and such to make it happen. The end result is that you get a list of users with their logon information, and you can then do with that what you will. I turned my list into an observable collection and bound it into a XAML form. One important note: you need to add a reference to the ActiveDs Type Library in the COM section of references to get to LargeInteger. Here's the class:

        namespace Veracity.Utilities
        {
            using System;
            using System.Collections.Generic;
            using System.DirectoryServices;
            using ActiveDs;
            using log4net;

            /// <summary>
            /// Finds users inside of the active directory system.
            /// </summary>
            public class UserFinder
            {
                /// <summary>
                /// Creates the default logger
                /// </summary>
                private static readonly ILog log = LogManager.GetLogger(typeof(UserFinder));

                /// <summary>
                /// Finds last logon information
                /// </summary>
                /// <param name="domain">The domain to search.</param>
                /// <param name="userName">The username for the query.</param>
                /// <param name="password">The password for the query.</param>
                /// <returns>A list of users with their last logon information.</returns>
                public IList<UserLoginInformation> GetLastLogonInformation(string domain, string userName, string password)
                {
                    IList<UserLoginInformation> result = new List<UserLoginInformation>();
                    DirectoryEntry entry = new DirectoryEntry(domain, userName, password, AuthenticationTypes.Secure);
                    DirectorySearcher directorySearcher = new DirectorySearcher(entry);
                    directorySearcher.PropertyNamesOnly = true;
                    directorySearcher.PropertiesToLoad.Add("name");
                    directorySearcher.PropertiesToLoad.Add("lastLogonTimeStamp");
                    SearchResultCollection searchResults;
                    try
                    {
                        searchResults = directorySearcher.FindAll();
                    }
                    catch (System.Exception ex)
                    {
                        log.Error("Failed to do a find all.", ex);
                        throw;
                    }

                    try
                    {
                        foreach (SearchResult searchResult in searchResults)
                        {
                            DirectoryEntry resultEntry = searchResult.GetDirectoryEntry();
                            if (resultEntry.SchemaClassName == "user")
                            {
                                UserLoginInformation logon = new UserLoginInformation();
                                logon.Name = resultEntry.Name;
                                PropertyValueCollection timeStampObject = resultEntry.Properties["lastLogonTimeStamp"];
                                if (timeStampObject.Count > 0)
                                {
                                    IADsLargeInteger logonTimeStamp = (IADsLargeInteger)timeStampObject[0];
                                    long lastLogon = (long)((uint)logonTimeStamp.LowPart + (((long)logonTimeStamp.HighPart) << 32));
                                    logon.LastLogonTime = DateTime.FromFileTime(lastLogon);
                                }
                                result.Add(logon);
                            }
                        }
                    }
                    catch (System.Exception ex)
                    {
                        log.Error("Failed to iterate search results.", ex);
                        throw;
                    }

                    return result;
                }
            }
        }

    Some important things to note:

    - Username and Password can be set to null, and if your computer is part of the domain, this may still work.
    - Domain should be set to something like LDAP://servername/CN=Users,DC=domain,DC=com
    - You're actually getting a COM object back, so that's why the LargeInteger conversions are happening.

    The class for UserLoginInformation looks like this:

        namespace Veracity.Utilities
        {
            using System;

            /// <summary>
            /// Represents user login information.
            /// </summary>
            public class UserLoginInformation
            {
                /// <summary>
                /// Gets or sets Name
                /// </summary>
                public string Name { get; set; }

                /// <summary>
                /// Gets or sets LastLogonTime
                /// </summary>
                public DateTime LastLogonTime { get; set; }

                /// <summary>
                /// Gets the age of the account.
                /// </summary>
                public TimeSpan AccountAge
                {
                    get
                    {
                        TimeSpan result = TimeSpan.Zero;
                        if (this.LastLogonTime != DateTime.MinValue)
                        {
                            result = DateTime.Now.Subtract(this.LastLogonTime);
                        }
                        return result;
                    }
                }
            }
        }

    I hope this is useful and instructive. Technorati Tags: Active Directory
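
    A short usage sketch for the original goal - accounts idle for more than a year (the LDAP path is a placeholder). Note that accounts that have never logged on come back with LastLogonTime at DateTime.MinValue, which this class reports as an AccountAge of zero, so such accounts would need separate handling:

        // Hypothetical usage; LDAP path and credentials are placeholders
        var finder = new UserFinder();
        IList<UserLoginInformation> logons = finder.GetLastLogonInformation(
            "LDAP://dc01/CN=Users,DC=example,DC=com", null, null);
        foreach (UserLoginInformation user in logons)
        {
            if (user.AccountAge > TimeSpan.FromDays(365))
            {
                Console.WriteLine("{0} last logged on {1}", user.Name, user.LastLogonTime);
            }
        }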

    Read the article

  • Ext JS how to tell PagingToolbar to use parent Grid storage?

    - by Nazariy
    I'm trying to build an application that uses a single config passed by the server as non-native JSON (it can contain functions). Everything works fine so far, but I'm curious why PagingToolbar does not have an option to use its parent grid's store. I have tried to set the store in my config like this, but without success:

        {...
            store: Ext.StoreMgr.lookup('unique_store_id')
        }

    Is there any way to do this without writing tons of JavaScript defining the store, grid and other items for each view in my application - or at least to extend the functionality of PagingToolbar so that it uses options from the parent object?

    UPDATED - here is a short example of the server response (minified):

        { "xtype":"viewport", "layout":"border", "renderTo":Ext.getBody(), "autoShow":true, "id":"mainFrame", "defaults":{"split":true,"useSplitTips":true}, "items":[ {"region":"center", "xtype":"panel", "layout":"fit", "id":"content-area", "items":{ "id":"manager-panel", "region":"center", "xtype":"tabpanel", "activeItem":0, "items":[ { "xtype":"grid", "id":"domain-grid", "title":"Manage Domains", "store":{ "xtype":"arraystore", "id":"domain-store", "fields":[...], "autoLoad":{"params":{"controller":"domain","view":"store"}}, "url":"index.php" }, "tbar":[...], "bbar":{ "xtype":"paging", "id":"domain-paging-toolbar", "store":Ext.StoreMgr.lookup('domain-store') }, "columns":[...], "selModel":new Ext.grid.RowSelectionModel({singleSelect:true}), "stripeRows":true, "height":350, "loadMask":true, "listeners":{ "cellclick":activateDisabledButtons } } ] }, } ]}
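
    One workaround that fits the single-config approach (a sketch, not the only way): have the server emit the store as a variable ahead of the config literal, so the grid and its PagingToolbar share the same instance instead of relying on an Ext.StoreMgr lookup at parse time, before the store is registered:

        var domainStore = new Ext.data.ArrayStore({
            storeId: 'domain-store',
            url: 'index.php',
            fields: [/* ... */]
        });

        // later, inside the config: both the grid and the
        // toolbar reference the same instance
        {
            xtype: 'grid',
            id: 'domain-grid',
            store: domainStore,
            bbar: new Ext.PagingToolbar({ pageSize: 25, store: domainStore })
        }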

    Read the article

  • ASP.NET MVC2 Data Access Layer

    - by Paul
    For a small/medium-sized project, I'm trying to figure out the 'ideal' way to have a domain layer and a data access layer. My view on coupling is that the domain models should not be tightly coupled to the database layer; in other words, the data access layer shouldn't actually know anything about the domain objects. I've been looking at LINQ to SQL, and it wants to use its own models that it generates, so it ends up VERY tightly coupled. While I love the way you use LINQ to SQL in code, I really don't like the way it wants to make its own domain objects. What are some alternatives that I should consider? I tried NHibernate, but I did not like the way I had to query for and retrieve different objects. I honestly love the syntax and the way you use LINQ; I just don't want it to be so tightly coupled to the domain objects.
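
    As a point of reference, here is a minimal sketch (my illustration, not from the question) of the decoupling being described: the domain layer owns a plain model and a repository abstraction, and any ORM-specific mapping stays inside the data layer. The Customer type and the in-memory repository are hypothetical:

        using System.Collections.Generic;

        // Domain layer: a plain object with no persistence attributes or ORM types.
        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // Domain layer: the abstraction the application codes against.
        public interface ICustomerRepository
        {
            Customer GetById(int id);
            IEnumerable<Customer> GetAll();
            void Save(Customer customer);
        }

        // Data layer: a trivial in-memory implementation. A LINQ to SQL or
        // NHibernate version would map between its generated entities and the
        // domain Customer here, so ORM types never leak into the domain layer.
        public class InMemoryCustomerRepository : ICustomerRepository
        {
            private readonly Dictionary<int, Customer> store = new Dictionary<int, Customer>();

            public Customer GetById(int id) { return store[id]; }
            public IEnumerable<Customer> GetAll() { return store.Values; }
            public void Save(Customer customer) { store[customer.Id] = customer; }
        }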

    Read the article

  • .NET Security Part 2

    - by Simon Cooper
    So, how do you create partial-trust appdomains? Where do you come across them? There are two main situations in which your assembly runs as partially-trusted using the Microsoft .NET stack:

    1. Creating a CLR assembly in SQL Server with anything other than the UNSAFE permission set. The permissions available in each permission set are given here.
    2. Loading an assembly in ASP.NET in any trust level other than Full. Information on ASP.NET trust levels can be found here, and you can configure the specific permissions available to assemblies using ASP.NET policy files.

    Alternatively, you can create your own partially-trusted appdomain in code and directly control the permissions and the full-trust API available to the assemblies you load into the appdomain. This is the scenario I'll be concentrating on in this post.

    Creating a partially-trusted appdomain

    There is a single overload of AppDomain.CreateDomain that allows you to specify the permissions granted to assemblies in that appdomain – this one. This is the only call that allows you to specify a PermissionSet for the domain; all the other calls simply use the permissions of the calling code. If the permissions are restricted, then the resulting appdomain is referred to as a sandboxed domain. There are three things you need to create a sandboxed domain:

    1. The specific permissions granted to all assemblies in the domain.
    2. The application base (aka working directory) of the domain.
    3. The list of assemblies that have full-trust if they are loaded into the sandboxed domain.

    The third item is what allows us to have a fully-trusted API that is callable by partially-trusted code. I'll be looking at the details of this in a later post.

    Granting permissions to the appdomain

    Firstly, the permissions granted to the appdomain. This is encapsulated in a PermissionSet object, initialized either with no permissions or full-trust permissions. For sandboxed appdomains, the PermissionSet is initialized with no permissions, then you add the permissions you want assemblies loaded into that appdomain to have by default:

        PermissionSet restrictedPerms = new PermissionSet(PermissionState.None);

        // all assemblies need Execution permission to run at all
        restrictedPerms.AddPermission(
            new SecurityPermission(SecurityPermissionFlag.Execution));

        // grant general read access to C:\config.xml
        restrictedPerms.AddPermission(
            new FileIOPermission(FileIOPermissionAccess.Read, @"C:\config.xml"));

        // grant permission to perform DNS lookups
        restrictedPerms.AddPermission(
            new DnsPermission(PermissionState.Unrestricted));

    It's important to point out that the permissions granted to an appdomain, and so to all assemblies loaded into that appdomain, are usable without needing to go through any SafeCritical code (see my last post if you're unsure what SafeCritical code is). That is, partially-trusted code loaded into an appdomain with the above permissions (and so running under the Transparent security level) is able to create and manipulate a FileStream object to read from C:\config.xml directly. It is only for operations requiring permissions that are not granted to the appdomain that partially-trusted code is required to call a SafeCritical method that then asserts the missing permissions and performs the operation safely on behalf of the partially-trusted code.
    The application base of the domain

    This is simply set as a property on an AppDomainSetup object, and is used as the default directory assemblies are loaded from:

        AppDomainSetup appDomainSetup = new AppDomainSetup
        {
            ApplicationBase = @"C:\temp\sandbox",
        };

    If you've read the documentation around sandboxed appdomains, you'll notice that it mentions a security hole if this parameter is set incorrectly. I'll be looking at this, and other pitfalls that will break the sandbox when using sandboxed appdomains, in a later post.

    Full-trust assemblies in the appdomain

    Finally, we need the strong names of the assemblies that, when loaded into the appdomain, will be run as full-trust, regardless of the permissions specified on the appdomain. These assemblies will contain methods and classes decorated with SafeCritical and Critical attributes. I'll be covering the details of creating full-trust APIs for partial-trust appdomains in a later post. This is how you get the strong name of an assembly to be executed as full-trust in the sandbox:

        // get the Assembly object for the assembly
        Assembly assemblyWithApi = ...

        // get the StrongName from the assembly's collection of evidence
        StrongName apiStrongName = assemblyWithApi.Evidence.GetHostEvidence<StrongName>();

    Creating the sandboxed appdomain

    So, putting these three together, you create the appdomain like so:

        AppDomain sandbox = AppDomain.CreateDomain(
            "Sandbox",
            null,
            appDomainSetup,
            restrictedPerms,
            apiStrongName);

    You can then load and execute assemblies in this appdomain like any other. For example, to load an assembly into the appdomain and get an instance of the Sandboxed.Entrypoint class, implementing IEntrypoint, you do this:

        IEntrypoint o = (IEntrypoint)sandbox.CreateInstanceFromAndUnwrap(
            @"C:\temp\sandbox\SandboxedAssembly.dll",
            "Sandboxed.Entrypoint");

        // call the Execute method on this object within the sandbox
        o.Execute();

    The second parameter to CreateDomain is for security evidence used in the appdomain. This was a feature of the .NET 2 security model, and has been (mostly) obsoleted in the .NET 4 model. Unless the evidence is needed elsewhere (e.g. isolated storage), you can pass in null for this parameter.

    Conclusion

    That's the basics of sandboxed appdomains. The most important object is the PermissionSet that defines the permissions available to assemblies running in the appdomain; it is this object that defines the appdomain as full or partial-trust. The appdomain also needs a default directory used for assembly lookups as the ApplicationBase parameter, and you can specify an optional list of the strong names of assemblies that will be given full-trust permissions if they are loaded into the sandboxed appdomain. Next time, I'll be looking closer at full-trust assemblies running in a sandboxed appdomain, and what you need to do to make an API available to partial-trust code.
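
    As an aside, the post references an IEntrypoint interface without showing it. A minimal version (an assumption on my part, not from the original) must live in an assembly both sides can load, and the implementation must derive from MarshalByRefObject so that the unwrapped instance is a remoting proxy and execution stays inside the sandbox:

        using System;

        // Shared contract, referenced by both the host and the sandboxed assembly.
        public interface IEntrypoint
        {
            void Execute();
        }

        namespace Sandboxed
        {
            // Hypothetical implementation compiled into SandboxedAssembly.dll.
            public class Entrypoint : MarshalByRefObject, IEntrypoint
            {
                public void Execute()
                {
                    // Runs with only the sandbox's granted permissions; this read
                    // succeeds because FileIOPermission for C:\config.xml was granted.
                    string config = System.IO.File.ReadAllText(@"C:\config.xml");
                    Console.WriteLine("Read {0} characters under partial trust.", config.Length);
                }
            }
        }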

    Read the article

  • Determining the hostname/IP address from the MX record in PHP

    - by pmmenneg
    Hi there, I have a basic email domain validation script that takes a user's email domain, resolves the IP address from that, and then checks that against various published blacklists. Here is how I am determining the IP:

        $domain = substr(strchr($email, '@'), 1);
        $ip = gethostbyname($domain);

    The problem is that some email address domains, such as [email protected], use an MX record rather than an A record, so gethostbyname('alumni.example.net') will fail to resolve. I know when a user's email domain is using an MX record by using the PHP checkdnsrr function, but once at that stage I am a little stuck as to how to proceed. In theory, I could parse out the 'root' domain, i.e. 'example.net', and check that, but I've not found a reliable regex that can handle this task when the user could easily have an email in the format of [email protected]... So, any suggestions on how best to tackle this?
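
    One direction worth sketching (my suggestion, not from the question): PHP's getmxrr() returns the domain's MX hosts directly, so you can resolve one of those instead of trying to parse the domain apart. Note that gethostbyname() signals failure by returning its argument unchanged:

        <?php
        // A hedged sketch: fall back to the first MX host when the A record lookup fails.
        $domain = substr(strchr($email, '@'), 1);
        $ip = gethostbyname($domain);

        if ($ip === $domain && checkdnsrr($domain, 'MX')) {
            $mxhosts = array();
            if (getmxrr($domain, $mxhosts) && count($mxhosts) > 0) {
                // resolve the first advertised mail exchanger instead
                $ip = gethostbyname($mxhosts[0]);
            }
        }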

    Read the article

  • Crossdomain TinyMCE

    - by pistacchio
    Hi, following this discussion and this link, I learnt that by adding document.domain = 'mydomain.com'; to the tinyMCE initializer file and to tiny_mce_popup.js I can overcome the cross-domain problem. I haven't tested it on a proper production server, but in my dev environment the base domain is localhost:8000 and my static files (the tinyMCE ones too) are on localhost:88. Adding document.domain = 'localhost:8000'; or document.domain = 'localhost:88'; doesn't solve the problem, as I get the following error:

        Uncaught Error: SECURITY_ERR: DOM Exception 18

    Any help? Thanks
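
    For what it's worth, a likely explanation (my reading, not confirmed in the question): document.domain accepts only a suffix of the page's hostname and never includes a port, so assigning a host:port string is rejected with exactly this kind of security error:

        // document.domain takes a bare host suffix; a port makes the assignment invalid.
        document.domain = 'mydomain.com';       // valid on a page served from x.mydomain.com
        // document.domain = 'localhost:8000';  // throws: the value may not contain a port
        //                                      // and must suffix-match the current hostname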

    Read the article
