Search Results

Search found 4429 results on 178 pages for 'exchange transport agents'.


  • EU Digital Agenda scores 85/100

    - by trond-arne.undheim
    If the Digital Agenda were a bottle of wine and I were wine critic Robert Parker, I would say the Digital Agenda has "a great bouquet, many good elements, with an astringent, dry and puckering mouthfeel that will not please everyone, but still displaying some finesse. A somewhat controlled effort with no surprises and a few noticeable flaws in the delivery. Noticeably shorter aftertaste than advertised by the producers. Score: 85/100. Enjoy now".

    The EU Digital Agenda states that "standards are vital for interoperability" and devotes a whole chapter to interoperability and standards. With this strong emphasis, there is hope that the EU's outdated standardization system is finally headed for reform. It has been 23 years since the legal framework for standardisation in the Information and Communications Technology (ICT) sector was completed by Council Decision 87/95/EEC. Standardization is market driven. For several decades the IT industry has been developing standards and specifications in global open standards development organisations (fora/consortia), many of which have transparency procedures and practices far superior to those of the European Standards Organizations. The Digital Agenda rightly acknowledges "the rise and growing importance of ICT standards developed by certain global fora and consortia". Some fora/consortia, of course, are distorted, influenced by single vendors, or have a poor track record and need constant vigilance, but they are the minority. The recognition therefore needs to be accompanied by eligibility criteria focused on openness.

    Will the EU reform its ICT standardization by the end of 2010? Possibly, but only if DG Enterprise takes on board that Information and Communications Technologies (ICTs) have driven half of the productivity growth in Europe over the past 15 years, a prominent fact in the EU's excellent Digital Competitiveness report 2010, published on Monday 17 May. It is fine to single out the ICT sector: it is simply the most important sector right now, as it fuels growth in all other sectors. Let's not wait for the entire standardization package, which may take another few years. Europe does not have time.

    The Digital Agenda is an umbrella strategy with deliverables from a host of actors across the Commission. For instance, the EU promises to issue "guidance on transparent ex-ante disclosure rules for essential intellectual property rights and licensing terms and conditions in the context of standard setting" by 2011, in the Horizontal Guidelines now out for public consultation by DG COMP and, to some extent, in DG ENTR's standardization policy reform. This is important. The EU will also issue procurement guidance as interoperability frameworks are put into practice. This is a joint responsibility of several DGs and is likely to suffer coordination problems, controversy and delays. We have seen plenty of the latter already, and I have commented on the Commission's own interoperability efforts elsewhere, with mixed luck. :(

    Yesterday, I watched the cartoonesque Korean western film The Good, the Bad and the Weird. In the movie (and I mean in the movie only), a bandit, a thief, and a bounty hunter, all excellent at whatever they do, fight over a treasure map. Whether that is a good analogy for the situation within the Commission, others are better judges of than I am. However, as a movie fanatic, I still await the final shoot-out, and, as in the film, the only certainty is that "life is about chasing and being chased".

    The missed opportunity (in this case, not following up the push from Member States to better define open standards based interoperability) is a casualty of the chaos that ensued in the European Wild West (and I mean that in the most endearing sense, with my apologies beforehand to any actors who, perhaps justifiably, cannot bear being compared to fictional movie characters). Instead of exposing the ongoing fight, the EU opted for the legalistic use of the term "standards" throughout the document. This is a term that, to the EU, excludes most standards used by the IT industry worldwide. So, while it meant "weapons down" for a moment, it will not lead to lasting peace. The Digital Agenda calls for the Member States to "Implement commitments on interoperability and standards in the Malmö and Granada Declarations by 2013". This is a far cry from the actual Ministerial Declarations, which called upon the Commission to help them with this implementation by recognizing and further defining open standards based interoperability. Unless there is more forthcoming from the Commission, the market's judgement will be: you simply fall short.

    Generally, I think the EU's focus now should be "from policy to practice", and the Digital Agenda does indeed stop short of tackling some highly practical issues. There is a need for progress beyond the Digital Agenda. Here are some suggestions that would help Europe re-take global leadership on openness, public sector reform, and economic growth:
    - A strong European software strategy centred around open standards based interoperability, by 2011.
    - An ambitious new eCommission strategy for 2011-15, focused on migration to open standards by 2015.
    - Aligning the IT portfolio across the Commission into one Digital Agenda DG, by 2012.
    - Focusing all best practice exchange in eGovernment on one social networking site, epractice.eu (full disclosure: I had a role in getting that site up and running).
    - Prioritizing public sector needs in global standardization over European standardization, by 2014.

    Read the article

  • Oracle Enterprise Manager 12c Configuration Best Practices (Part 3 of 3)

    - by Bethany Lapaglia
    This is part 3 of a three-part blog series that summarizes the most commonly implemented configuration changes to improve performance and operation of a large Enterprise Manager 12c environment. A "large" environment is categorized by the number of agents, targets and users. See the Oracle Enterprise Manager Cloud Control Advanced Installation and Configuration Guide chapter on Sizing for more details on sizing your environment properly. Part 1 of this series covered recommended configuration changes for the OMS and Repository. Part 2 covered recommended changes for the WebLogic server. Part 3 covers general configuration recommendations and a few known issues. The entire series can be found in the My Oracle Support note titled Oracle Enterprise Manager 12c Configuration Best Practices [1553342.1].

    Configuration Recommendations

    Configure E-Mail Notifications for EM-related Alerts
    In some environments, notifications for events on different target types may be sent to different support teams (e.g. notifications on host targets may be sent to a platform support team). However, the EM application administrators should be well informed of any alerts or problems seen on the EM infrastructure components.
    Recommendation: Create a new incident rule for monitoring all EM components and set up the notifications to be sent to the EM administrator(s). The available notification methods can create or update an incident, send an email or forward to an event connector. To set up the incident rule set, follow the steps below. Note that each individual rule in the rule set can have different actions configured.
    1. To create an incident rule for monitoring the EM components, click on Setup / Incidents / Incident Rules. On the All Enterprise Rules page, click on the out-of-box rule called "Incident management Ruleset for all targets", then click on the Actions drop-down list and select "Create Like Rule Set...".
    2. For the rule set name, enter a name such as MTM Ruleset. Under the Targets tab, select "All targets of types" and select "OMS and Repository" from the drop-down list. This target type contains all of the key EM components (OMS servers, repository, domains, etc.).
    3. Click on the Rules tab. To edit a rule, click on the rule name and click on Edit.
    4. Modify the following rules:
    a. Incident creation rule for metric alerts:
    i. Leave the Type set as is, but change the Severity to add Warning by clicking on the drop-down list and selecting "Warning". Click Next.
    ii. Add or modify the actions as required (e.g. add email notifications). Click Continue and then click Next.
    iii. Leave the Name and description the same and click Next.
    iv. Click Continue on the Review page.
    b. Incident creation rule for target unreachable:
    i. Leave the Type set as is, but change the Target type to add OMS and Repository by clicking on the drop-down list and selecting "OMS and Repository". Click Next.
    ii. Add or modify the actions as required (e.g. add email notifications). Click Continue and then click Next.
    iii. Leave the Name and description the same and click Next.
    iv. Click Continue on the Review page.
    5. Modify the actions for any other rule as required, and be sure to click the "Save" push button to save the rule set, or all changes will be lost.

    Configure Out-of-Band Notifications for the EM Agent
    Out-of-band notifications act as a backup when there is a complete EM outage or a repository database issue. This is configured on the agent of the OMS server and can be used to send emails or execute another script that would create a trouble ticket. It will send notifications about the following issues:
    - Repository database down
    - All OMSes down
    - Repository-side collection job that is broken or has an invalid schedule
    - Notification job that is broken or has an invalid schedule
    Recommendation: To set up out-of-band notifications, refer to the MOS note "How To Setup Out Of Bound Email Notification In 12c" (Doc ID 1472854.1).

    Modify the Performance Test for the EM Console Service
    The EM Console Service has an out-of-box performance test that is run to determine the status of this service. The test issues a request via an HTTP method to a specific URL. By default, the HTTP method used for this test is GET but, for performance reasons, it should be changed to HEAD. The URL used for this request points to a specific OMS server by default. If a multi-OMS system has been implemented and the OMS servers are behind a load balancer, then the URL must be modified to point to the load balancer name instead of a specific server name. If this is not done and a portion of the infrastructure is down, the EM Console Service will show as down because this test will fail.
    Recommendation: Modify the HTTP method for the EM Console Service test, and the URL if required, following the detailed steps below.
    1. Click on Targets / Services. From the list of services, click on the EM Console Service.
    2. On the EM Console Service page, click on the Test Performance tab.
    3. At the bottom of the page, click on the Web Transaction test called EM Console Service Test.
    4. Click on the Service Tests and Beacons breadcrumb near the top of the page.
    5. Under the Service Tests section, make sure the EM Console Service Test is selected and click on the Edit push button.
    6. Under the Transaction section, make sure the Access Logout page transaction is selected and click on the Edit push button.
    7. Under the Request section, change the HTTP Method from the default of GET to the recommended value of HEAD. The URL in this section must be modified to point to the load balancer name instead of a specific server name if multiple OMSes have been implemented.

    Check for Known Issues

    Job Purge repository job is shown as down
    This issue appears after upgrading EM from 12c to 12cR2. On the Repository page under Setup > Manage Cloud Control > Repository, the job called "Job Purge" is shown as down and the Next Scheduled Run is blank. Also, repvfy reports that this is a missing DBMS_SCHEDULER job.
    Recommendation: In EM 12cR2, the apply_purge_policies have been moved from the MGMT_JOB_ENGINE package to the EM_JOB_PURGE package. To remove this error, execute the command below:
    $ repvfy verify core -test 2 -fix
    To confirm that the issue is resolved, execute:
    $ repvfy verify core -test 2
    It can also be verified by refreshing the Job Service page in EM and checking the status of the job; it should now be Up.

    Configure the listener targets in EM with the listener password (where required)
    In a RAC environment, the grid home and RDBMS homes are typically owned by different OS users. The listener always runs from the grid home, and only the listener process owner can query or change the listener properties. The listener uses a password to allow other OS users (e.g. the agent user) to query the listener process for parameters. EM has a default listener target metric that queries these properties; if the agent is not permitted to do so, a TNS incident (TNS-1190) is logged in the listener's log file, and EM will report this error every time it is encountered there. This means that the listener targets in EM also need to have this password set; not doing so will cause many TNS-1190 incidents.
    Recommendation: Set a listener password and include it in the configuration of the listener targets in EM. For steps on setting listener passwords, see MOS notes 260986.1 and 427422.1.
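    As a rough sketch of that last recommendation, the password can be set from the grid home with lsnrctl (assuming the default listener name LISTENER; run this as the listener owner and adjust names for your environment):

        $ lsnrctl
        LSNRCTL> set current_listener LISTENER
        LSNRCTL> change_password
        LSNRCTL> save_config
        LSNRCTL> exit

    The same password then has to be entered in the Monitoring Configuration of the corresponding listener target in EM so that the agent is allowed to query the listener.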

    Read the article

  • Five Things Learned at the BSR Conference in San Francisco on Nov 2nd-4th

    - by Evelyn Neumayr
    The BSR Conference 2011, "Redefining Leadership", held from Nov 2nd to Nov 4th in San Francisco with Oracle as one of the main sponsors, saw senior business executives, civil society representatives, and other experts from around the world gather to share strategies and insights on the future of sustainability. The general conference sessions kicked off on November 2nd with a plenary address by former U.S. Vice President Al Gore. Other sessions were presented by CEOs of the caliber of Carl Bass (Autodesk), Brian Dunn (Best Buy), Carlos Brito (Anheuser-Busch InBev) and Ofra Strauss (Strauss Group). Here are five key highlights from the conference:

    1. The main leadership challenge is integrating sustainability into core business functions and overcoming short-termism. The "BSR GlobeScan State of Sustainable Business Poll 2011", a survey of nearly 500 business leaders from 300 member companies, shows that 84% of respondents are optimistic that global businesses will embrace CSR/sustainability as part of their core strategies and operations in the next five years, but they consider integrating sustainability into their core business functions the key challenge. It is still difficult for many companies committed to the sustainability agenda to find investors who understand the long-term implications; as Al Gore said, "Many companies are given the signal by the investors that it is the short term results that matter and that is a terribly debilitating force in the market."

    2. Companies are required to address increasing compliance requirements and transparency in their supply chains, especially in relation to conflict minerals legislation and water management. The Dodd-Frank legislation, OECD guidelines, and the upcoming Securities and Exchange Commission (SEC) rules require companies to monitor the upstream sourcing of tin, tantalum, tungsten, and gold, but given the complexity of this issue, companies need to collaborate and partner with peer companies in their own industry as well as in other industries to understand how to address conflict minerals in their supply chains. The Institute of Public and Environmental Affairs' (IPE) China Water Pollution Map enables the public to access thousands of environmental quality, discharge, and infraction records released by various government agencies. Empowered with this information, the public can place greater pressure on polluting companies to comply with environmental standards and create solutions to improve their performance.

    3. A new standard for reporting on supply chain greenhouse gas emissions is available. The new "Scope 3" Supply Chain Greenhouse Gas Inventory Standard, released on October 4th 2011, is the only international greenhouse gas emissions standard that accounts for the full lifecycle of a company's products. It provides a framework for companies to account for indirect emissions outside of energy use, such as transportation, manufacturing, and distribution, and it incorporates both upstream and downstream impacts of a product. With key investors now listing supplier vulnerability to rising energy prices and disruptions of service as a key concern, greenhouse gas (GHG) management isn't just for leading companies but a necessity for any business.

    4. Environmental, social, and corporate governance (ESG) reporting is becoming increasingly important to investors and other stakeholders. While European investors have traditionally driven the ESG agenda, U.S. investors are increasingly including ESG data in their analyses. This trend will likely grow as stakeholders continue to demand that an ESG lens be applied to their investments. Investors are increasingly looking to partner on sustainability, as they see the benefits of ESG providing significant returns on investment.

    5. Software companies are offering an increasing variety of solutions to help drive change and measure performance internally, in supply chains, and across peer companies. The significant challenge is how to integrate different software systems to facilitate decision-making based on a holistic understanding of trade-offs. Jon Chorley, Chief Sustainability Officer and Vice President, Supply Chain Management Product Strategy at Oracle, was a panelist in the "Trends in Sustainability Software" session and commented: "How we think about our business decisions really comes down to how we think about cost. And as long as we don't assign a cost to things that have an environmental impact or social impact, then we make decisions based on incomplete information. If we could include that in the process that determines 'Is this product profitable?', we would then have a much better decision."

    For more information on BSR, visit www.bsr.org. You can also view highlights of the plenary session at http://www.bsr.org/en/bsr-conference/session-summaries/2011. Oracle is proud to be a sponsor of this BSR conference.

    By Elena Avesani, Principal Product Strategy Manager, Oracle

    Read the article

  • Postfix MySql Dovecot - SMTP Authentication Failure

    - by borncamp
    Hello, I have a Postfix setup with Dovecot and MySQL. The server is running Debian Squeeze. The MySQL server is a slave that has data pushed to it from a primary (Postfix) mail server (running a different OS). The emails are stored on a replicated GlusterFS volume. I am able to check email using Thunderbird over IMAP. However, SMTP requests fail. After turning on query logging for the MySQL server, I have noticed that no query statement is executed to retrieve the user information when an SMTP client tries to authenticate. I'd like to know what I'm doing wrong or what the next troubleshooting steps are. I'm about to pull my hair out. Below is some log and configuration data that I thought would be relevant. Your help is much appreciated.

    The file /var/log/mail.log shows:
    Oct 11 14:54:16 mailbox2 postfix/smtpd[25017]: connect from unknown[192.168.0.44]
    Oct 11 14:54:19 mailbox2 postfix/smtpd[25017]: warning: unknown[192.168.0.44]: SASL PLAIN authentication failed:
    Oct 11 14:54:25 mailbox2 postfix/smtpd[25017]: warning: unknown[192.168.0.44]: SASL LOGIN authentication failed: VXNlcm5hbWU6
    Oct 11 14:55:48 mailbox2 postfix/smtpd[25017]: warning: unknown[192.168.0.44]: SASL PLAIN authentication failed: VXNlcm5hbWU6
    Oct 11 14:55:54 mailbox2 postfix/smtpd[25017]: warning: unknown[192.168.0.44]: SASL LOGIN authentication failed: VXNlcm5hbWU6
    Oct 11 14:55:57 mailbox2 postfix/smtpd[25017]: disconnect from unknown[192.168.0.44]

    This is my dovecot.conf file:
    log_timestamp = "%Y-%m-%d %H:%M:%S "
    mail_location = maildir:/var/mail/virtual/%d/%n/
    auth_mechanisms = plain login
    disable_plaintext_auth = no
    namespace {
      inbox = yes
      location =
      prefix = INBOX.
      separator = .
      type = private
    }
    passdb {
      args = /etc/dovecot/dovecot-mysql.conf
      driver = sql
    }
    protocols = imap pop3
    service auth {
      unix_listener /var/spool/postfix/private/auth {
        group = postfix
        mode = 0660
        user = postfix
      }
      unix_listener auth-master {
        mode = 0600
        user = postfix
      }
      user = root
    }
    ssl_cert = </etc/ssl/certs/dovecot.pem
    ssl_key = </etc/ssl/private/dovecot.pem
    userdb {
      args = /etc/dovecot/dovecot-mysql.conf
      driver = sql
    }
    protocol lda {
      auth_socket_path = /var/run/dovecot/auth-master
      mail_plugins = sieve
      postmaster_address = [email protected]
    }
    protocol pop3 {
      pop3_uidl_format = %08Xu%08Xv
    }

    Here is my dovecot-mysql.conf file:
    connect = host=127.0.0.1 dbname=postfix user=postfix password=ffjM2MYAqQtAzRHX
    driver = mysql
    default_pass_scheme = MD5-CRYPT
    password_query = SELECT username AS user,password FROM mailbox WHERE username = '%u' AND active='1'
    user_query = SELECT CONCAT('/var/mail/virtual/', maildir) AS home, 1001 AS uid, 109 AS gid, CONCAT('*:messages=10000:bytes=',quota) as quota_rule, 'Trash:ignore' AS quota_rule2 FROM mailbox WHERE username = '%u' AND active='1'

    Here is my output from 'postconf -n':
    append_dot_mydomain = no
    biff = no
    bounce_template_file = /etc/postfix/bounce.cf
    broken_sasl_auth_clients = yes
    config_directory = /etc/postfix
    delay_warning_time = 0h
    dovecot_destination_recipient_limit = 1
    inet_interfaces = all
    local_recipient_maps = $virtual_mailbox_maps
    local_transport = virtual
    mailbox_command = procmail -a "$EXTENSION"
    mailbox_size_limit = 0
    maximal_queue_lifetime = 1d
    message_size_limit = 25600000
    mydestination = mailbox2.cws.net, debian.local.cws.net, localhost.local.cws.net, localhost
    myhostname = mailbox2.cws.net
    mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 172.18.0.119 63.164.138.3
    myorigin = /etc/mailname
    proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks $virtual_mailbox_limit_maps
    readme_directory = no
    recipient_delimiter = +
    relay_domains =
    relayhost =
    smtp_connect_timeout = 10
    smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
    smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
    smtpd_client_message_rate_limit = 50
    smtpd_client_recipient_rate_limit = 500
    smtpd_client_restrictions = permit_sasl_authenticated, permit_mynetworks
    smtpd_delay_reject = yes
    smtpd_discard_ehlo_keyword_address_maps = hash:/etc/postfix/discard_ehlo
    smtpd_helo_required = yes
    smtpd_helo_restrictions = permit_mynetworks, reject_invalid_helo_hostname, permit
    smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
    smtpd_sasl_auth_enable = yes
    smtpd_sasl_authenticated_header = yes
    smtpd_sasl_path = private/auth
    smtpd_sasl_security_options = noanonymous
    smtpd_sasl_tls_security_options = $smtpd_sasl_security_options
    smtpd_sasl_type = dovecot
    smtpd_sender_restrictions = permit_mynetworks, reject_non_fqdn_sender, reject_unknown_sender_domain, permit
    smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
    smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
    smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
    smtpd_use_tls = yes
    transport_maps = hash:/etc/postfix/transport
    virtual_alias_maps = proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_maps.cf, proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_domain_maps.cf, proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_domain_catchall_maps.cf
    virtual_gid_maps = static:1001
    virtual_mailbox_base = /var/mail/virtual/
    virtual_mailbox_domains = proxy:mysql:/etc/postfix/sql/mysql_virtual_domains_maps.cf
    virtual_mailbox_maps = proxy:mysql:/etc/postfix/sql/mysql_virtual_mailbox_maps.cf, proxy:mysql:/etc/postfix/sql/mysql_virtual_alias_domain_mailbox_maps.cf
    virtual_transport = dovecot
    virtual_uid_maps = static:1001
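    One way to narrow this down (a sketch only; user@example.com / secret are placeholder credentials): first confirm the password_query returns a row when run by hand, then try AUTH PLAIN against Postfix with a hand-built token while watching the Dovecot and mail logs:

        $ mysql -h 127.0.0.1 -u postfix -p postfix -e \
          "SELECT username AS user, password FROM mailbox WHERE username='user@example.com' AND active='1';"
        $ printf '\0user@example.com\0secret' | base64    # AUTH PLAIN token: <NUL>user<NUL>password
        AHVzZXJAZXhhbXBsZS5jb20Ac2VjcmV0
        $ telnet localhost 25
        EHLO test
        AUTH PLAIN AHVzZXJAZXhhbXBsZS5jb20Ac2VjcmV0

    If the manual query works but Dovecot logs nothing at all during the AUTH attempt, the problem is usually between Postfix and the Dovecot auth socket (smtpd_sasl_path = private/auth versus the unix_listener path and its postfix/0660 permissions) rather than in the SQL configuration itself.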

    Read the article

  • MSCC: Career & IT Fair 2014

    A couple of weeks ago, Ibraahim and Yunus approached me to see whether it would be interesting to participate in the 1st Career & IT Fair organised by the UoM Computer Club. Luckily, we had met at the Global Windows Azure Bootcamp, but I wasn't too sure whether it would be possible for me to attend after all. The main reasons were work demands and the fact that the Mauritius Software Craftsmanship Community currently has no advertising material at all. Here's the brief statement of the event: "The UOM Students' Computer Club in collaboration with the UOM Students' Union and UOM CSE Department is organising a 'Career & IT Fair' on the 23rd and 24th April 2014. This event has for objective to provide a platform to tertiary students, secondary students as well as vocational students, the opportunity to meet job recruiters." Luckily, I was reminded that the 23rd is a Wednesday, so I decided it might be interesting to move our weekly Code & Coffee session to the university and thereby be able to attend the career fair. As it turned out, it was a great choice, and thankfully Pritvi, Nadim and Ishwon volunteered to be around at the "community booth". The computer club kindly gave us - the MSCC and the LUGM - one of their spaces in the lobby area of the Paul Octave Wiéhé Auditorium.

    My impression about the event
    Very well and professionally organised. Seriously, the lads over at the UoM Computer Club did a great job in organising their two-day event, and I felt comfortable at all times. Actually, it was kind of amusing to watch some of the members constantly running around and checking everything; even so, the whole process went off smoothly and easily. There were a couple of interesting pieces of information and announcements during the opening ceremony. For example, the Computer Science faculty is a very young one, initiated only back in 1988, by just 4 staff members at the time. Now, after 25 years, they have achieved quite a lot, and there are currently 1,000+ active students attending the numerous lectures and courses. But there is no room to rest on previous achievements, and I was kind of surprised to hear that there are plans to extend the campus and to offer new lectures in the fields of nanotechnology, big data handling, and - fingers crossed - the introduction and establishment of a space control centre. Mauritius is already part of the Square Kilometre Array (SKA), and hopefully there will be more activities in that direction in the near future.

    Community - Awareness and collaboration
    As stated earlier, I could only spend one morning, but luckily other members of the MSCC and the LUGM stayed for the whole two days and provided answers to any interested person. As for me, I took the opportunity to get in touch with the other companies in the lobby, mainly to create some awareness about our IT communities, but also to see whether there might be options for future engagement in common activities. So far, I was able to speak to representatives of the following companies: ACCA Mauritius, Business at Work, Infomil, LinkByNet, Microsoft Indian Ocean Islands & French Pacific, Spherinity Training Institute, Spoon Consulting Ltd., and State Informatics Ltd. Unfortunately, I only had a quick chat with an HR representative of LinkByNet, but I fully count on our MSCC members like Nitin, or LUGM member Ronny, to spread our intentions over there. So far, all of the representatives have been really interested in our concepts and activities, and I'm currently working on an introduction flyer for the MSCC that I'm going to send out to all those contacts by mail. It would be great to have more craftsmen as well as professional support on board.

    Some pictures from the event
    MSCC: Fantastic outlook for the near future. Announcements were made on big data, nanotechnology, and a space control centre in Mauritius. Interesting!
    MSCC: The lobby area was crammed with students. Great way to exchange and network. Good luck to all candidates!

    Passing the relay baton to...
    I recommend that you continue reading about the first Career & IT Fair on Ish's blog. He has a great summary and more details on those two days of IT activities than I do. Thanks, and feel free to leave a comment (or two)...

    Read the article

  • The Internet of Things & Commerce: Part 3 -- Interview with Kristen J. Flanagan, Commerce Product Management

    - by Katrina Gosek, Director | Commerce Product Strategy-Oracle
    Internet of Things & Commerce Series: Part 3 (of 3). And now for the final installment of my three-part series on the Internet of Things & Commerce. Post one, "The Next 7,000 Days", introduced the idea of the Internet of Things, followed by a second post interviewing one of our chief commerce innovation strategists, Brian Celenza. This final post in the series is an interview with Kristen J. Flanagan, lead product manager for Oracle Commerce omnichannel strategy. She takes us through the past, present, and future of how our Commerce Solution is re-imagining the way physical and digital shopping come together.
    -------
    QUESTION: It's your job to stay on top of what our customers need, not only to run their online businesses effectively, but also to make sure they have product capabilities they can innovate and grow on. What key trend has been top-of-mind for you and our customers around this collision of physical and digital shopping?

    Kristen: I'll agree with Brian Celenza that, hands down, mobile has forced a major disruption in shopping and selling behavior. A few years ago, mobile exploded at a pace I don't think anyone was expecting. Early on, we saw our customers scrambling to establish a mobile presence, mostly through "screen scraping" technologies. As smartphones continued to advance (at lightning speed!), our customers started to investigate ways to truly tap into their eCommerce capabilities to deliver the mobile experience. They started looking to us for a means of using the eCommerce services and capabilities to deliver a mobile experience that is tailored for mobile, rather than the desktop experience on a smaller screen. In the future, I think we'll see customers starting to really understand what their shoppers need and expect from a mobile offering and how they can adapt their content and the delivery of that content to meet those needs. And mobile shopping doesn't stop at the consumer/buyer. Because the in-store experience is compelling and has advantages that digital just can't offer, we're also starting to see the eCommerce services being leveraged on mobile for in-store sales associates. Brick-and-mortar retailers are interested in putting the omnichannel product catalog, promotions, and cart into the hands of knowledgeable associates. Retailers are now looking to connect and harness the eCommerce data in-store so that shoppers have a reason to walk in. I think we'll be seeing a lot more customers thinking about melding the in-store and digital experiences to present a richer offering for shoppers.

    QUESTION: What are some examples of what our customers are doing currently to bring these concepts to reality?

    Kristen: Well, without question, connecting digital and brick-and-mortar worlds is becoming table stakes for selling experiences. If a brand has a foot in both worlds (i.e., isn't a pure-play online retailer), they have to connect the dots, because shoppers, whether consumers or B2B buyers, don't think in clearly defined channels anymore. The expectation is connectedness, for on- and offline experiences, promotions, products, and customer data. What does this mean practically for businesses selling goods on- and offline? It touches a lot of systems: inventory info on the eCommerce site, fulfillment options across channels (buy online/pick up in store), order information (representing various channels for a cohesive view of shopper order history), promotions across digital and store, etc. A few years ago, the main link between store and digital was the smartphone. We all remember when "apps" became a thing and many of our customers were scrambling to get a native app out there. Now we're seeing more strategic thinking around the benefits of mobile web vs. native and how that ties into the purpose and role of mobile within the digital channel; put more broadly, how these pieces fit together in the overall brand puzzle. The same could be said for "showrooming." Where it was once a major concern (i.e., shoppers using stores to look at merchandise and then order online from Amazon), in recent months it has emerged that the inverse is becoming a reality as well. "Webrooming" (using digital sites to do research before making a purchase in the store) is a new behavior that pure-play retailers are challenged with. There are many technologies, behaviors, and pieces of information that need to tie together to offer a holistic omnichannel shopping experience. As a result, brands are looking for ways to connect the digital and in-store experiences to bridge the gaps: shared assortments across channels, assisted selling apps that arm associates with information about shoppers, shared promotions, inventory, etc.

    QUESTION: How has Oracle Commerce been built to help brands make the link between in-store and digital over the last few years?

    Kristen: Over the last seven years, the product has been in step with the changes in industry needs. Here is a brief history of the evolution. Prior to Oracle's acquisition of ATG and Endeca, key investments were made in cross-channel functionality that we are still building on today.
    Commerce Service Center (v2007.1): ATG introduced the Commerce Service Center in 2007.1, marking the first entry into what was then called "cross-channel." The Commerce Service Center is a call-center-agent-facing application that enables agents to see shopper orders, the online catalog, promotions, and pricing. It is tightly integrated with the eCommerce capabilities of the platform and commerce engine and provided a means of connecting data from the call center and online channels.
    REST services framework (v9.1): In v9.1 we introduced the REST services framework and interface in the Platform, which enabled customers to use ATG web services in other applications. This framework has become the basis for our subsequent omnichannel features and functionality.
    Multisite Architecture (v10): With the v10 release, we introduced the Multisite Architecture, which enabled customers to manage multiple sites (and channels) within a single instance of the BCC. Customers could create site- and channel-specific catalogs, promotions, targeters, and scenarios.
    Endeca Page Builder (2.x) / Experience Manager (3.x): With the introduction of Endeca for Mobile (now part of the core platform, available through the reference store; see below) on top of Page Builder (and eventually Experience Manager), Endeca gave business users the tools to create and manage native and mobile web applications. And since the acquisition of both ATG (2011) and Endeca (2012), Oracle Commerce has leveraged the best of each leading technology's capabilities for omnichannel commerce to continue to drive innovation for our customers.
    Service enablement of core Oracle Commerce capabilities (v10.1.1, 10.2, & 11): After the establishment of the REST services framework and interface, we followed up in subsequent releases with service enablement of core Oracle Commerce capabilities throughout the iOS native app and the enablement of the core Commerce Service Center features. The result is that customers can leverage these services for their integrations with other systems, as well as for their omnichannel initiatives.
    Mobile web reference application (v10.1): In 10.1 we introduced the shopper-facing mobile reference application that showed how to use Oracle Commerce to deliver a mobile web experience for shoppers. This included the use of Experience Manager and cartridges to drive those experiences on select pages.
    Native (iOS) reference application (v10.1.1): We came out with the 10.1.1 shopper-facing native iOS reference app that illustrated how to use the Commerce REST services to deliver an iOS app. It also included Experience Manager-driven pages.
    Assisted Selling reference application (v10.2.1): The Assisted Selling reference application is our first reference application designed for the in-store associate. This iOS app shows customers how they can use Oracle Commerce data and information to provide a high-touch, consultative sales environment, as well as to put the endless aisle into the hands of their associates. Shoppers can start a cart online, and in-store associates can access that cart via the application to provide more information or add products, and then transact using the ATG engine.
    Support for Retail promotions (v11): As part of the v11 release, we worked with teams in the Oracle Retail Global Business Unit (RGBU) to assess which promotion types and capabilities are supported across our products. Those products included Oracle Commerce, Oracle Point of Service (ORPOS), and Oracle Retail Price Management (RPM). The result is that customers can now more easily support omnichannel use cases between the store and digital.

    Making sure Oracle Commerce can help support the omnichannel needs of our customers is core to our product strategy. With 89% of consumers now using two or more channels to make a single purchase, ensuring that cross-channel interactions are linked is critical to a great customer experience, and to sales. As Oracle Commerce evolves, we want to make it simple for organizations to create, deliver, and scale experiences across touchpoints with our "create once, deploy commerce anywhere" framework. We have a flexible, services-oriented architecture that allows data, content, catalogs, cart, experiences, personalization, and merchandising to be shared across touchpoints and easily extended into new environments like mobile, social, in-store, call center, and new websites. [For the latest downloads and Oracle Commerce documentation, please visit the Oracle Technology Network.]
    ------
    Thank you to both Brian and Kristen for their contributions to this blog series and their continued thought leadership for Oracle Commerce. We are all looking forward to the coming months and years of new shopping behaviors and opportunities to innovate. Because, if the digital fabric of our everyday lives continues to change at the same pace, the next five years (that's just under 2,000 days) will be dramatic.
    ----------
    THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND MAY NOT BE INCORPORATED INTO A CONTRACT OR AGREEMENT

    Read the article

  • CodePlex Daily Summary for Wednesday, March 28, 2012

    Popular Releases
    - callisto: callisto 2.0.22: Patched update contains a new option to enable or disable file sharing.
    - JSON Toolkit: JSON Toolkit 3.1: Slight performance improvement (5% - 10%); new JsonException class.
    - SPSiteInstaller: SPSiteInstaller v1.3: Now you can upgrade your existing structure; just use ./remsolns, ./addsolns, ./createpages and ./addwebparts as per your requirements. I.e., after adding more pages/webparts etc. to the siteconfig.xml, you can now rerun the above as many times as you want; if a page already exists it will be skipped, and if a webpart is already on a page it won't be added again.
    - Picturethrill: Version 2.3.28.0: Straightforward image selection. New clean UI look. Super stable. Simplified user experience.
    - Indent Guides for Visual Studio: Indent Guides v12 (beta 2): Note: this beta is likely to be less stable than the previous one. If you have severe troubles using this version, please report them with as much detail as possible (especially other extensions/addins that you may have) and downgrade to the last stable release. Changed since v12 (beta 1): new options dialog with Quick Set selections for behavior; restructured settings storage (should be more reliable); asynchronous background document analysis; glow effect now appears in p...
    - SQL Monitor - managing sql server performance: SQL Monitor 4.2 alpha 16: Finally fixed a problem with a logic fault checking for temporary table names... I really mean finally...
    - ScintillaNET: ScintillaNET 2.5: A slew of bug fixes with a few new features sprinkled in. This release also upgrades the SciLexer and SciLexer64 DLLs to version 3.0.4. The official issue list: 32402, 27137, 31548, 30179, 24932, 29701, 31238, 26875, 30052.
    - Mugen MVVM Toolkit: Mugen MVVM Toolkit ver 1.1: Added Design mode support.
    - Multiwfn: Multiwfn 2.3.2: Multiwfn 2.3.2.
    - Harness: Harness 2.0.2: Change to .NET Framework Client Profile; bug fix for the download dialog auto answer; bug fix for the setFocus command; added "SendKeys" command; removed "closeAll" command; minor bugs fixed.
    - BugNET Issue Tracker: BugNET 0.9.161: Below is a list of fixes in this release: BGN-2092 - Link in email "visit your profile" not functional; BGN-2083 - Manager of BugNET cannot edit project when it is not public; BGN-2080 - Clicking on a link in the project summary causes an error (0.9.152.0); BGN-2070 - Missing functionality on Feed.aspx; BGN-2069 - Calendar View does not work; BGN-2068 - Time tracking totals not OK; BGN-2067 - Issues List page size bug: Index was out of range. Must be non-negative and less than the si...
    - YAF.NET (aka Yet Another Forum.NET): v1.9.6.1 RTW: v1.9.6.1 FINAL is .NET v4.0 ONLY. v1.9.6.1 has: performance improvements; .NET v4.0 improvements; improved Facebook integration. KNOWN ISSUES WITH THIS RELEASE: ON INSTALL PLEASE DON'T CHECK "Upgrade BBCode Extensions...". More complete change list and discussion here: http://forum.yetanotherforum.net/yaf_postst14201_v1-9-6-1-RTW-Dated--3-26-2012.aspx
    - Quick Performance Monitor: Version 1.8.1: Added option to set the main window to be 'Always On Top'. Use the context (right-click) menu on the graph to toggle.
    - Asp.NET Url Router: v1.0: Build for .NET 2.0 and .NET 4.0.
    - menu4web: menu4web 0.0.3: menu4web 0.0.3.
    - ArcGIS Editor for OpenStreetMap: ArcGIS Editor for OSM 2.0 Final: This release installs the ArcGIS Editor for OSM Server Component and/or the ArcGIS Editor for OSM Desktop components. The Desktop tools allow you to download data from the OpenStreetMap servers and store it locally in a geodatabase. You can then use the familiar editing environment of ArcGIS Desktop to create, modify, or delete data. Once you are done editing, you can post back the edit changes to OSM to make them available to all OSM users. The Server Component allows you to quickly create...
    - Craig's Utility Library: Craig's Utility Library 3.1: This update adds about 60 new extension methods, a couple of new classes, and a number of fixes, including: Additions: added DateSpan class; added GenericDelimited class. Random additions: added a static, thread-friendly version of Random.Next called ThreadSafeNext. AOP Manager additions: added Destroy function to AOPManager (clears out all data so the system can be recreated; really only useful for testing...). ORM additions: added PagedCommand and PageCount functions to ObjectBaseClass (same as M...
    - DotSpatial: DotSpatial 1.1: This is a minor release. See the changes in the issue tracker. Minimal: includes DotSpatial core and essential extensions. Extended: includes debugging symbols and additional extensions. Just want to run the software? An end-user (non-programmer) version is available branded as MapWindow. Want to add your own feature? Develop a plugin using the template and contribute to the extension feed (you can also write extensions that you distribute in other ways). Components are available as NuGet pa...
    - Microsoft All-In-One Code Framework - a centralized code sample library: C++, .NET Coding Guideline: This document describes the coding style guideline for native C++ and .NET (C# and VB.NET) programming used by the Microsoft All-In-One Code Framework project team.
    - WebDAV for WHS: Version 1.0.67: Added: check whether the Remote Web Access is turned on or not; added: check for Add-In updates.

    New Projects
    - .NET File Cache: FileCache is a concrete implementation of the .NET Framework 4's System.Runtime.Caching.ObjectCache that uses the local filesystem as the target location.
    - ASP.NET MVC / Web API / Web Pages: This is the source code repository for open source ASP.NET products. The products include MVC, Web API and Web Pages with Razor.
    - AudioManager in XNA: This project shows how to create an audio engine in XNA. The audio engine supports XNA's simple audio libraries and XACT. The project is written using XNA 4.0, the C# programming language and the Visual Studio IDE.
    - CLab: CLab
    - Dominorder: A multi-agent simulation of dominoes (the agents) that are placed in a random order on a board and then try to form some kind of ordering. (Used for evaluating the MASON framework.) All in Java.
    - Easy Eject: Easy Eject lets you eject USB drives quickly.
    - Excel File Cleaner: This tool is designed to reduce the size of, and speed up, Excel 2007/2010 workbooks. It will only work for the new XML file formats (XLSX and XLSM). It cleans the files by reducing the number of styles in use and the number of activated cells. See Microsoft KBs: KB213904 & KB244435.
    - ForumMVC: A forum site using MVC.
    - Freshbooks.NET: C# client library for the Freshbooks API.
    - Frogger 3D: FROGGER 3D GAME: a 3D remake of the classic Konami game. You have to help 5 frogs reach their respective holes, passing over a street full of cars and a river where you can only use trunks and waterlilies to pass by without drowning. Levels: the number of levels is potentially infinite, even if you will find levels over the 15th almost undoable... Note: some third-party code & models have been used in the game; all rights reserved to their respective owners... Project made for the XNA exam of M...
    - Gestione11: Management software created with LightSwitch.
    - Intel: The Dungeon of Doom: A text-based RPG, mostly being done to learn C++. This summary will be expanded as the project is completed.
    - ketabresan: ketabresan is an online book delivery project.
    - Kinect anti theft: Kinect anti-theft application. It uses capturemylog to save data.
    - Komon: Enterprise framework targeting data-driven systems.
    - local.angle API: The local.angle API makes it easier for developers using .NET to integrate with the local.angle group of community websites.
    - MessageTel: It's a simple app for forwarding phone numbers. You can download it from Zune Marketplace for free; just search "MessageTel".
    - Modularity: Create modules for ASP.NET using a base class that helps you subscribe to application events more easily than before and in a unit-testable manner.
    - MongoLP - A LinqPad Driver: A MongoDB LinqPad driver that uses the official C# driver from 10gen.
    - MvcApplicationTeste: Test project.
    - Nokia Developer Days - Windows Phone: Nokia Developer Days - Windows Phone.
    - Orchard Inline Editing: An implementation of inline editing for Orchard CMS; this version currently takes advantage of Fluid Infusion.
    - Paulo FPU: Example project.
    - PES Championship Control: An open-source project that aims to create a desktop/web application to manage a Pro Evolution Soccer championship.
    - projectforgit: projectforgit
    - QuickbookWorkers: Hello!
    - SharpUpdater: Auto-updater for Windows desktop applications.
    - SovoktvAPI: Library for the Sovoktv REST API.
    - StyleRepair: A tool that will automatically fix certain StyleCop warnings. The tool is based on StyleCopFixer with added functionality and will be hosted at http://stylerepair.codeplex.com. This tool is still in development and more warnings will be added soon. Rules 1200-1207 are fixed using NArrange with optional regions. The tool allows for fixing multiple warnings at once; however, it is best at this point to fix groups of only the same warning number at once, as warnings are not fixed in any pa...
    - SWCustomRibbon: Create your custom Ribbon tab, groups and buttons, so you can add your favourite links to the Ribbon menu quite easily.
    - Theme Override Orchard module: This Orchard module adds configuration to the admin page that lets you override the styling of the current theme (works with tenants too).
    - txresearch: Research projects.
    - Unity AutoRegisterExtension: This project is a Unity container extension that provides attribute-based registration.
    - Visual Localizer: Visual Localizer is an open-source plugin for Visual Studio 2008 and 2010. It makes localizing completed C# projects much easier by providing functions such as automatic string lookup, advanced work with ResX files, etc.
    - Vodigi Open Source Interactive Digital Signage: Vodigi is a free, open source, interactive digital signage software solution that offers all the features you need to promote and advertise your products and services. With Vodigi, you can have a virtual sales team dedicated to promoting and advertising your products and services... a team that knows your products and services inside and out, can provide detailed interactive information about your products and services, and is available any time... day or night... to help you succeed. ...
    - WishProject: WishProject is a project aimed at studying the Wicket, Spring and Hibernate frameworks.
    - WP7PUBLISH: WP7PUBLISH is a framework for building content delivery system applications.
    - WtCHJqueryPlugins: WtCHJqueryPlugins
    - YouTube API Class & Server Control for ASP.NET 4.0: This project gives you the option to use the YouTube API easily in your ASP.NET application using a simple DLL file. You can also display YouTube videos on your website using the Server Control.

    Read the article

  • postfix: Temporary lookup failure for FQDN

    - by Thufir
    I'm using the FQDN of dur.bounceme.net, which I want to resolve(?) to localhost. That is, I want mail to [email protected] to get delivered to user@localhost. I've tried following the Ubuntu guide on this and seem to be going in circles a bit.

    root@dur:~# postfix stop
    postfix/postfix-script: stopping the Postfix mail system
    root@dur:~# postfix start
    postfix/postfix-script: starting the Postfix mail system
    root@dur:~# telnet dur.bounceme.net 25
    Trying 127.0.1.1...
    telnet: Unable to connect to remote host: Connection refused
    root@dur:~# telnet localhost 25
    Trying 127.0.0.1...
    Connected to localhost.
    Escape character is '^]'.
    220 dur.bounceme.net ESMTP Postfix (Ubuntu)
    ehlo dur
    250-dur.bounceme.net
    250-PIPELINING
    250-SIZE 10240000
    250-VRFY
    250-ETRN
    250-STARTTLS
    250-ENHANCEDSTATUSCODES
    250-8BITMIME
    250 DSN
    mail from:[email protected]
    250 2.1.0 Ok
    rcpt to:[email protected]
    451 4.3.0 <[email protected]>: Temporary lookup failure
    rcpt to:thufir@localhost
    451 4.3.0 <thufir@localhost>: Temporary lookup failure
    quit
    221 2.0.0 Bye
    Connection closed by foreign host.

    root@dur:~# grep telnet /var/log/mail.log
    Aug 28 00:24:45 dur postfix/smtpd[18256]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <thufir@localhost>: Temporary lookup failure; from=<[email protected]> to=<thufir@localhost> proto=ESMTP helo=<dur>
    Aug 28 00:24:58 dur postfix/smtpd[18256]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <[email protected]>: Temporary lookup failure; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<dur>
    Aug 28 00:54:55 dur postfix/smtpd[18825]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <[email protected]>: Temporary lookup failure; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<dur>
    Aug 28 00:55:08 dur postfix/smtpd[18825]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <thufir@localhost>: Temporary lookup failure; from=<[email protected]> to=<thufir@localhost> proto=ESMTP helo=<dur>

    root@dur:~# postconf -n
    alias_database = hash:/etc/aliases
    alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases
    append_dot_mydomain = no
    biff = no
    broken_sasl_auth_clients = yes
    config_directory = /etc/postfix
    default_transport = smtp
    home_mailbox = Maildir/
    inet_interfaces = loopback-only
    mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/conf.d/01-mail-stack-delivery.conf -m "${EXTENSION}"
    mailbox_size_limit = 0
    mailman_destination_recipient_limit = 1
    mydestination = dur, dur.bounceme.net, localhost.bounceme.net, localhost
    myhostname = dur.bounceme.net
    mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
    readme_directory = no
    recipient_delimiter = +
    relay_domains = lists.dur.bounceme.net
    relay_transport = relay
    relayhost =
    smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
    smtp_use_tls = yes
    smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
    smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
    smtpd_sasl_auth_enable = yes
    smtpd_sasl_authenticated_header = yes
    smtpd_sasl_local_domain = $myhostname
    smtpd_sasl_path = private/dovecot-auth
    smtpd_sasl_security_options = noanonymous
    smtpd_sasl_type = dovecot
    smtpd_tls_auth_only = yes
    smtpd_tls_cert_file = /etc/ssl/certs/ssl-mail.pem
    smtpd_tls_key_file = /etc/ssl/private/ssl-mail.key
    smtpd_tls_mandatory_ciphers = medium
    smtpd_tls_mandatory_protocols = SSLv3, TLSv1
    smtpd_tls_received_header = yes
    smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
    smtpd_use_tls = yes
    tls_random_source = dev:/dev/urandom
    transport_maps = hash:/etc/postfix/transport
    root@dur:~#
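    A sketch of how this kind of 451 "Temporary lookup failure" is often tracked down (assuming the paths from the postconf output above): make sure every map listed in alias_maps and transport_maps has a readable indexed .db file, and query the maps by hand the way Postfix would:

        root@dur:~# postalias /etc/aliases
        root@dur:~# postalias /var/lib/mailman/data/aliases
        root@dur:~# postmap /etc/postfix/transport
        root@dur:~# postalias -q thufir hash:/etc/aliases
        root@dur:~# postmap -q dur.bounceme.net hash:/etc/postfix/transport

    When a lookup table cannot be opened (for example, a missing /var/lib/mailman/data/aliases.db), Postfix deliberately answers with a temporary 4xx error instead of bouncing the mail, which matches the 451 responses in the session above.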

    Read the article

  • Welcome to the ISV Migration Center (IMC) Team blog

    - by lukasz.romaszewski(at)oracle.com
    Welcome to the ISV Migration Center (IMC) Team blog. The IMC is a team of senior Oracle technical consultants whose aim is to enable partners to rapidly and successfully adopt and implement Oracle's latest technology. The IMC consultants are trained and equipped to deliver leading-edge, enterprise-quality technology solutions. This blog has been created to serve as an information exchange platform on Oracle Fusion Middleware and Database products, so you will find how-tos, articles, demos and other technical resources. We will also publish our upcoming workshops, webcasts and seminars, so make sure you check it regularly to get the latest updates.

    Here's our team:

    Lukasz Romaszewski - Java and middleware specialist with 8 years' experience in architecting, developing and supporting enterprise solutions based on J2EE and Oracle Database technology. At Oracle since April 2008, working as an IMC Migration Consultant at the Oracle Partner Hub in Cracow, Poland. Helps Oracle Partners migrate their solutions to the latest Oracle Fusion Middleware stack, running hands-on migration workshops and seminars across Europe. Experienced in the following areas and products: Oracle WebLogic Application Server 11g; Application Development Framework (ADF); Oracle SOA Suite 11g; Oracle Forms 6i, 10g and 11g; Oracle Database (PL/SQL, AQ, XML DB); Java EE 5.0 based architecture.

    Murat Teksoz - Specialist in Oracle Database and DB options, Oracle Linux, APEX and Oracle Business Intelligence, with 13 years' experience in database management, performance tuning, diagnostics, installation and configuration, database security, and high availability and disaster recovery solutions. Working at the Oracle IMC Istanbul since September 2008, delivering partner workshops and seminars in Europe and Central Asia. Experienced in the following areas and products: Oracle 9i, 10g and 11g Database solutions; Oracle Partitioning, Total Recall and Advanced Compression; Oracle high availability solutions (Real Application Clusters); Oracle disaster recovery solutions (Oracle Data Guard); Oracle Grid Control; Oracle Linux; Oracle Business Intelligence solutions (Oracle BI 10g-11g); migration tools (SQL Developer) for migrating from SQL Server, MySQL, Sybase and DB2 to Oracle Database; Oracle APEX (Application Express).

    Vadim Melnikov - Oracle Database specialist with DB options, Linux and virtualization skills. Vadim has more than 8 years' experience with Oracle products and now works as a database consultant at the Oracle IMC Moscow as an employee of the FORS Development Center, a Russian Oracle Platinum partner. Helps Oracle Partners migrate solutions to Oracle from other platforms and adopt new Oracle technologies, running workshops and seminars. Experienced in the following areas and products: Oracle Database 9i, 10g and 11g solutions (SQL, PL/SQL, installation, configuration, performance tuning, diagnostics, database management); Oracle DB options (Partitioning, Total Recall, Advanced Compression); Oracle Enterprise Manager; Oracle Enterprise Linux; Oracle VM 2 for x86; migration to Oracle Database; Oracle Application Express.

    Gokhan Gungor - Java (J2EE) lead developer and architect. Designed and developed web applications, middleware systems/services, desktop applications and back-end tools/services using Java, WebLogic Server, JBoss and open source frameworks. Joined Oracle in 2010 as a Fusion Middleware consultant at the Istanbul IMC, responsible for running migration and adoption workshops and seminars covering Java technology, ADF, WebLogic and SOA, and for providing technical consultancy on migration projects. Experienced in the following areas and products: Oracle WebLogic Server; Application Development Framework (ADF); JDeveloper; Java EE (EJB, JMS, Servlet, JSP, JSF, JavaMail, JTA, JAAS, JSTL, JAXB); Java SE (JavaBeans, JDBC, XML, XSL, RMI, JNDI, JAXP); Oracle Database 10g and 11g.

    Dmitry Nefedkin - Oracle Middleware and Java specialist with 7+ years' experience in developing and designing enterprise solutions based on Oracle Database and Middleware, developing Oracle E-Business Suite customizations, and designing integration architecture within companies. Joined the Oracle team in October 2010 as an IMC FMW Consultant in Oracle Alliances & Channels in Moscow, Russia. Experienced in the following areas and products: Oracle WebLogic Application Server 11g; Oracle Service Bus 11g; Oracle SOA Suite 10g (BPEL PM, ESB, OWSM); Oracle Application Server 10g; Oracle Forms 6i and 9i; Oracle BI Publisher; Oracle ADF 10g; Oracle Database (SQL tuning, PL/SQL, AQ, Streams); Java EE 5 development.

    Check out our web site as well: http://www.oracle.com/partners/en/most-popular-resources/027930

    Read the article

  • Setting up VPN client: L2TP with IPsec

    - by zachar
    I've got to connect to a VPN server. It works on Windows, but not in Ubuntu 10.04. The number of options is confusing for me. This is the input that I have:

IP address of the VPN
Pre-shared key to authenticate
Information that MS-CHAPv2 is used
Login and password for the VPN

I was trying to achieve that with Network Manager and with L2TP IPsec VPN Manager 1.0.9, but failed. Here is some logged information from L2TP IPsec VPN Manager 1.0.9:

Nov 09 15:21:46.854 ipsec_setup: Stopping Openswan IPsec...
Nov 09 15:21:48.088 Stopping xl2tpd: xl2tpd.
Nov 09 15:21:48.132 ipsec_setup: Starting Openswan IPsec U2.6.23/K2.6.32-49-generic...
Nov 09 15:21:48.308 ipsec__plutorun: Starting Pluto subsystem...
Nov 09 15:21:48.318 ipsec__plutorun: adjusting ipsec.d to /etc/ipsec.d
Nov 09 15:21:48.338 ipsec__plutorun: 002 added connection description "my_vpn_name"
Nov 09 15:21:48.348 ipsec__plutorun: 003 NAT-Traversal: Trying new style NAT-T
Nov 09 15:21:48.348 ipsec__plutorun: 003 NAT-Traversal: ESPINUDP(1) setup failed for new style NAT-T family IPv4 (errno=19)
Nov 09 15:21:48.349 ipsec__plutorun: 003 NAT-Traversal: Trying old style NAT-T
Nov 09 15:21:48.994 104 "my_vpn_name" #1: STATE_MAIN_I1: initiate
Nov 09 15:21:48.994 003 "my_vpn_name" #1: received Vendor ID payload [RFC 3947] method set to=109
Nov 09 15:21:48.994 003 "my_vpn_name" #1: received Vendor ID payload [Dead Peer Detection]
Nov 09 15:21:48.994 106 "my_vpn_name" #1: STATE_MAIN_I2: sent MI2, expecting MR2
Nov 09 15:21:48.994 003 "my_vpn_name" #1: NAT-Traversal: Result using RFC 3947 (NAT-Traversal): i am NATed
Nov 09 15:21:48.994 108 "my_vpn_name" #1: STATE_MAIN_I3: sent MI3, expecting MR3
Nov 09 15:21:48.994 004 "my_vpn_name" #1: STATE_MAIN_I4: ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=oakley_3des_cbc_192 prf=oakley_sha group=modp1024}
Nov 09 15:21:48.995 117 "my_vpn_name" #2: STATE_QUICK_I1: initiate
Nov 09 15:21:48.995 004 "my_vpn_name" #2: STATE_QUICK_I2: sent QI2, IPsec SA established transport mode {ESP=>0x0c96795d <0x483e1a42 xfrm=AES_128-HMAC_SHA1 NATOA=none NATD=none DPD=none}
Nov 09 15:21:49.996 [ERROR 210] Failed to open l2tp control file 'c my_vpn_name'

and from syslog:

Nov 9 15:21:46 o99 L2tpIPsecVpnControlDaemon: Opening client connection
Nov 9 15:21:46 o99 L2tpIPsecVpnControlDaemon: Executing command ipsec setup stop
Nov 9 15:21:46 o99 ipsec_setup: Stopping Openswan IPsec...
Nov 9 15:21:48 o99 kernel: [ 4350.245171] NET: Unregistered protocol family 15
Nov 9 15:21:48 o99 ipsec_setup: ...Openswan IPsec stopped
Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Command ipsec setup stop finished with exit code 0
Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Executing command invoke-rc.d xl2tpd stop
Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Command invoke-rc.d xl2tpd stop finished with exit code 0
Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Opening client connection
Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Closing client connection
Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Executing command ipsec setup start
Nov 9 15:21:48 o99 kernel: [ 4350.312483] NET: Registered protocol family 15
Nov 9 15:21:48 o99 ipsec_setup: Starting Openswan IPsec U2.6.23/K2.6.32-49-generic...
Nov 9 15:21:48 o99 ipsec_setup: Using NETKEY(XFRM) stack
Nov 9 15:21:48 o99 kernel: [ 4350.410774] Initializing XFRM netlink socket
Nov 9 15:21:48 o99 kernel: [ 4350.413601] padlock: VIA PadLock not detected.
Nov 9 15:21:48 o99 kernel: [ 4350.427311] padlock: VIA PadLock Hash Engine not detected.
Nov 9 15:21:48 o99 kernel: [ 4350.441533] padlock: VIA PadLock not detected.
Nov 9 15:21:48 o99 ipsec_setup: ...Openswan IPsec started
Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Command ipsec setup start finished with exit code 0
Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Executing command invoke-rc.d xl2tpd start
Nov 9 15:21:48 o99 ipsec__plutorun: adjusting ipsec.d to /etc/ipsec.d
Nov 9 15:21:48 o99 pluto: adjusting ipsec.d to /etc/ipsec.d
Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Command invoke-rc.d xl2tpd start finished with exit code 0
Nov 9 15:21:48 o99 ipsec__plutorun: 002 added connection description "my_vpn_name"
Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Executing command ipsec auto --ready
Nov 9 15:21:48 o99 ipsec__plutorun: 003 NAT-Traversal: Trying new style NAT-T
Nov 9 15:21:48 o99 ipsec__plutorun: 003 NAT-Traversal: ESPINUDP(1) setup failed for new style NAT-T family IPv4 (errno=19)
Nov 9 15:21:48 o99 ipsec__plutorun: 003 NAT-Traversal: Trying old style NAT-T
Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Command ipsec auto --ready finished with exit code 0
Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Executing command ipsec auto --up my_vpn_name
Nov 9 15:21:48 o99 L2tpIPsecVpnControlDaemon: Command ipsec auto --up my_vpn_name finished with exit code 0
Nov 9 15:21:49 o99 L2tpIPsecVpnControlDaemon: Closing client connection

Can anyone tell me something more about that? Where is the mistake?

    Read the article

  • Combining Shared Secret and Username Token – Azure Service Bus

    - by Michael Stephenson
    As discussed in the introduction article, this walkthrough will explain how you can implement WCF security with the Windows Azure Service Bus to ensure that you can protect your endpoint in the cloud with a shared secret but also flow through a username token, so that in your listening WCF service you will be able to identify who sent the message. This could either be an application or a user, depending on how you want to use your token.

Prerequisites

Before going into the walkthrough I want to explain a few assumptions about the scenario we are implementing, but to keep the article shorter I am not going to walk through all of the steps of setting some of this up. In the solution we have a simple console application which represents the client application. There is also the services WCF application, which contains the WCF service we will expose via the Windows Azure Service Bus. The WCF service application in this example was hosted in IIS 7 on Windows 2008 R2 with AppFabric Server installed and configured to auto-start the WCF listening services. I am not going to go through significant detail around the IIS setup because it should not matter in relation to this article; however, if you want to understand more about how to configure WCF and IIS for such a scenario, please refer to the following paper, which goes into a lot of detail: http://tinyurl.com/8s5nwrz

The Service Component

To begin with, let's look at the service component and how it can be configured to listen to the service bus using a shared secret but also to accept a username token from the client. In the sample, the service component is called Acme.Azure.ServiceBus.Poc.UN.Services. It has a single service based on the Visual Studio template for a WCF Service Application, so we have a service called Service1 with its Echo method. Nothing special so far!... The next step is to look at the web.config file to see how we have configured the WCF service. In the services section of the WCF configuration I have created my service along with a local endpoint, which I simply used to do a little bit of diagnostics and to check it was working; more importantly, there is the Windows Azure endpoint, which uses the ws2007HttpRelayBinding (note that this should also work just the same if you're using netTcpRelayBinding). The key points to note are the service behaviour called MyServiceBehaviour and the service bus endpoint behaviour called MyEndpointBehaviour. We will go into these in more detail later.

The Relay Binding

The relay binding for the service has been configured to use the TransportWithMessageCredential security mode. This is the important bit: the transport security relates to the interaction between the service and the Azure Service Bus, while the message credential is where we use our username token, as specified in the message/clientCredentialType attribute. Note also that we have left the relayClientAuthenticationType set to RelayAccessToken. This means that authentication will be made against ACS for accessing the service bus, and messages will not be accepted from any sender who has not been authenticated by ACS.
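The original post illustrated this with screenshots of the web.config, which are not reproduced here. As a rough sketch of what such a binding section might look like (the binding name is illustrative; the security attributes are the ones named in the text):

    <bindings>
      <ws2007HttpRelayBinding>
        <binding name="usernameRelayBinding">
          <!-- transport security to the Service Bus relay,
               username token carried in the message -->
          <security mode="TransportWithMessageCredential"
                    relayClientAuthenticationType="RelayAccessToken">
            <message clientCredentialType="UserName" />
          </security>
        </binding>
      </ws2007HttpRelayBinding>
    </bindings>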
The Endpoint Behaviour

The endpoint behaviour is configured to use the shared secret client credential for accessing the service bus, and for diagnostic purposes I have also included the service registry element. If you are familiar with the Windows Azure Service Bus relay feature, this will be very familiar to you; it is a very common setup for this section. There is nothing specific to the username token implementation here.

The Service Behaviour

Now we come to the bit with most of the username token work in it. In the service behaviour I have included the serviceCredentials element, set it up to use userNameAuthentication, and supplied my own custom username token validator. This setup means that WCF will hand off to my class for validating the username token details. I have also added the serviceSecurityAudit element to give me simple auditing of access.

My UsernamePassword Validator

WCF hands off to my username password validator class when validating the token, giving me a convenient place to check the token credentials against an on-premise store. You have all of the validation features of a non-service-bus WCF implementation available, such as validating the username and password against Active Directory or ASP.NET membership features, or, as in my case, something much simpler.

The Client

Now let's take a look at the client side of this solution and how we can configure the client to authenticate against ACS but also send a username token over to the service component so it can implement additional security checks on-premise. I have a console application, and in the program class I want to use the proxy generated with Add Service Reference to send a message via the Azure Service Bus. In my WCF client configuration I have set up the Azure Service Bus URL and am using the ws2007HttpRelayBinding. Next is my configuration for the relay binding: security is set to TransportWithMessageCredential, so we will flow the username token with the message, and to the RelayAccessToken relayClientAuthenticationType, which means the component will be validated against ACS before being allowed to access the relay endpoint to send a message. After the binding we configure the endpoint behaviour; this is the normal configuration for using a shared secret to access a Service Bus endpoint. Finally, in the code of the client console application that calls the service bus, we create our proxy and make a normal call to a WCF service, but this time we also set the ClientCredentials to use the appropriate username and password, which will flow through the service bus to our service, which will validate them.

Conclusion

As you can see from the above walkthrough, it is not too difficult to configure a service to use both a shared secret and a username token at the same time. This gives you the power and protection offered by the Access Control Service in the cloud, but also the ability to flow additional tokens to the on-premise component for additional security features to be implemented.

Sample

The sample used in this post is available at the following location: https://s3.amazonaws.com/CSCBlogSamples/Acme.Azure.ServiceBus.Poc.UN.zip
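Since the screenshots are missing here too, a hedged sketch of the two code pieces described above may help. A custom validator of this kind derives from WCF's UserNamePasswordValidator, and the client sets its credentials on the generated proxy; the store lookup and the proxy/endpoint names below are illustrative, not taken from the original sample:

    using System;
    using System.IdentityModel.Selectors;
    using System.IdentityModel.Tokens;

    // Hypothetical validator: WCF calls Validate for each incoming username token.
    public class SimpleUserNameValidator : UserNamePasswordValidator
    {
        public override void Validate(string userName, string password)
        {
            // A real implementation would check Active Directory, ASP.NET
            // membership, or another on-premise store.
            if (userName != "testuser" || password != "secret")
                throw new SecurityTokenException("Unknown or invalid credentials.");
        }
    }

And on the client side, roughly:

    var client = new Service1Client("RelayEndpoint"); // proxy from Add Service Reference
    client.ClientCredentials.UserName.UserName = "testuser";
    client.ClientCredentials.UserName.Password = "secret";
    Console.WriteLine(client.Echo("Hello via the Service Bus"));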

    Read the article

  • Centos 7. Freeradius fails to start on boot

    - by Alex
    I was messing around with FreeRADIUS and MySQL (MariaDB), and it seems the FreeRADIUS service can't start properly on boot. But it starts fine when run by the root user or in debug mode (radiusd -X), and then works just fine! Debug mode shows no errors. The systemctl command shows that radiusd.service has failed to start.

/var/log/messages output:

Aug 21 15:52:29 nexus-test systemd: Starting The Apache HTTP Server...
Aug 21 15:52:29 nexus-test systemd: Starting MariaDB database server...
Aug 21 15:52:29 nexus-test systemd: Starting FreeRADIUS high performance RADIUS server....
Aug 21 15:52:29 nexus-test systemd: Started OpenSSH server daemon.
Aug 21 15:52:29 nexus-test mysqld_safe: 140821 15:52:29 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.
Aug 21 15:52:29 nexus-test mysqld_safe: 140821 15:52:29 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
Aug 21 15:52:30 nexus-test systemd: Started Postfix Mail Transport Agent.
Aug 21 15:52:30 nexus-test avahi-daemon[604]: Registering new address record for fe80::250:56ff:fe85:e4af on eth0.*.
Aug 21 15:52:30 nexus-test systemd: radiusd.service: control process exited, code=exited status=1
Aug 21 15:52:30 nexus-test systemd: Failed to start FreeRADIUS high performance RADIUS server..
Aug 21 15:52:30 nexus-test systemd: Unit radiusd.service entered failed state.
Aug 21 15:52:31 nexus-test kdumpctl: kexec: loaded kdump kernel
Aug 21 15:52:31 nexus-test kdumpctl: Starting kdump: [OK]
Aug 21 15:52:31 nexus-test systemd: Started Crash recovery kernel arming.
Aug 21 15:52:31 nexus-test systemd: Started The Apache HTTP Server.
Aug 21 15:52:31 nexus-test systemd: Started MariaDB database server.

/var/log/radius/radius.log output:

Thu Aug 21 15:24:16 2014 : Info: rlm_sql (sql): Driver rlm_sql_mysql (module rlm_sql_mysql) loaded and linked
Thu Aug 21 15:24:16 2014 : Info: rlm_sql (sql): Attempting to connect to database "radius"
Thu Aug 21 15:24:16 2014 : Info: rlm_sql (sql): Opening additional connection (0)
Thu Aug 21 15:24:16 2014 : Error: rlm_sql_mysql: Couldn't connect socket to MySQL server radius@localhost:radius
Thu Aug 21 15:24:16 2014 : Error: rlm_sql_mysql: Mysql error 'Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)'
Thu Aug 21 15:24:16 2014 : Error: rlm_sql (sql): Opening connection failed (0)
Thu Aug 21 15:24:16 2014 : Error: /etc/raddb/mods-enabled/sql[47]: Instantiation failed for module "sql"

After seeing this I tried to replicate the problem: I killed mariadb.service and ran debug mode again, and it spits out the same errors as in radius.log. I tried disabling iptables and firewalld and rebooting, but no luck:

systemctl disable iptables
systemctl disable firewalld

So maybe the problem is in the process startup order, or a delay of some kind. Maybe FreeRADIUS's SQL module can't connect to a not-yet-started MariaDB? If so, how can I fix this? In earlier versions of RHEL/CentOS you could easily see the service start order in rc.d; now I don't know where to look. I am new to this fancy "systemd", "systemctl", "firewalld" stuff that CentOS 7 introduced, so sorry, I'm a little bit confused. Also this new FreeRADIUS 3 structure...

PS. MariaDB is enabled on startup, and the credentials in the FreeRADIUS DB configuration are correct.

A little update: cat /etc/systemd/system/multi-user.target.wants/radiusd.service output:

[Unit]
Description=FreeRADIUS high performance RADIUS server.
After=syslog.target network.target

[Service]
Type=forking
PIDFile=/var/run/radiusd/radiusd.pid
ExecStartPre=-/bin/chown -R radiusd.radiusd /var/run/radiusd
ExecStartPre=/usr/sbin/radiusd -C
ExecStart=/usr/sbin/radiusd -d /etc/raddb
ExecReload=/usr/sbin/radiusd -C
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target
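If it really is a startup-order problem, I guess the systemd way to express the dependency would be a drop-in for the unit, something like this (a sketch only, which I haven't verified; I'm assuming the database unit really is named mariadb.service, which systemctl list-units should confirm):

# /etc/systemd/system/radiusd.service.d/override.conf (hypothetical drop-in)
[Unit]
After=mariadb.service
Requires=mariadb.service

followed by systemctl daemon-reload and a reboot to test. Can anyone confirm whether this is the right approach?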

    Read the article

  • ASP.NET: Using pickup directory for outgoing e-mails

    - by DigiMortal
    Sending e-mails out from web applications is a very common task. When we are working on or testing our systems with real e-mail addresses, we don't want recipients to receive e-mails (especially if we are using some subset of real data). In this posting I will show you how to make the ASP.NET SMTP client write e-mails to disk instead of sending them out.

SMTP settings for web application

I have seen many times code where all the SMTP information is kept in appSettings just to be read in code and handed to the SMTP client. It is not necessary, because we can define all these settings under the system.net => mailSettings node. If you are using web.config to keep SMTP settings, then all you have to do in your code is create SmtpClient with the empty constructor.

var smtpClient = new SmtpClient();

The empty constructor means that all settings are read from the web.config file.

What is a pickup directory?

If you want to drastically raise the e-mail throughput of your SMTP server, it is not a very wise plan to communicate with it using the SMTP protocol; it only adds overhead to your network and SMTP server. Clients make connections and send messages out, and that too is overhead we can avoid. If clients write their e-mails to some folder that the SMTP server can access, then the SMTP server has e-mail forwarding as its only resource-hungry task. File operations are far faster than communication over the SMTP protocol. The directory where clients write their e-mails as files is called the pickup directory. Exchange Server, for example, has support for pickup directories. And as there are applications with a lot of users who want e-mail notifications, the .NET SMTP client supports writing e-mails to a pickup directory instead of sending them out.

How to configure ASP.NET SMTP to use a pickup directory?

Let's say it is more than easy. It is very easy. This is all you need.

<system.net>
  <mailSettings>
    <smtp deliveryMethod="SpecifiedPickupDirectory">
      <specifiedPickupDirectory pickupDirectoryLocation="c:\temp\maildrop\"/>
    </smtp>
  </mailSettings>
</system.net>

Now make sure you don't miss some points:

The pickup directory must physically exist, because it is not created automatically.
IIS (or Cassini) must have write permissions to the pickup directory.
Go through your code and look for hardcoded SMTP settings.

Also take a look at all the places in your code where you send out e-mails, and check that no custom settings are used for SMTP. And don't forget that your mails will now be written to the pickup directory and are not sent out to recipients anymore.

Advanced scenario: configuring SMTP client in code

In some advanced scenarios you may need to support multiple SMTP servers. If the configuration is dynamic, or is not kept in web.config, you need to initialize your SmtpClient in code. This is all you need to do.

var smtpClient = new SmtpClient();
smtpClient.DeliveryMethod = SmtpDeliveryMethod.SpecifiedPickupDirectory;
smtpClient.PickupDirectoryLocation = pickupFolder;

Easy, isn't it? I like it when advanced scenarios end up with simple and elegant solutions rather than rocket science.

Note for IIS SMTP service

The SMTP service of IIS is also able to use a pickup directory. If you have set up IIS with the SMTP service, you can configure your ASP.NET application to use the IIS pickup folder. In this case you have to use the following setting for the delivery method:

SmtpDeliveryMethod.PickupDirectoryFromIis

You can also set this in the web.config file.

<system.net>
  <mailSettings>
    <smtp deliveryMethod="PickupDirectoryFromIis" />
  </mailSettings>
</system.net>

Conclusion

Those who were still using other methods to avoid sending e-mails out in development or testing environments can now remove all the bad code from their applications and rely on the mail settings of ASP.NET. It is easy to configure, and you have less code to support e-mails when you use the built-in e-mail features wisely.
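As a quick end-to-end sketch (the addresses are illustrative): with the SpecifiedPickupDirectory configuration above in place, ordinary sending code is unchanged, and each message is dropped into the pickup folder as an .eml file instead of leaving the machine.

    using System.Net.Mail;

    var smtpClient = new SmtpClient();   // reads mailSettings from web.config
    var message = new MailMessage(
        "noreply@example.com",           // from
        "recipient@example.com",         // to
        "Order confirmation (test)",     // subject
        "This never leaves the dev box.");
    smtpClient.Send(message);            // written to c:\temp\maildrop\ as an .eml file

You can then open the generated .eml files in a mail client or a text editor to verify content and headers.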

    Read the article

  • How to get ISA 2006 Web Proxy to work with the Single Network Adapter template

    - by tronda
    I need to test an issue with running our application behind a proxy server with different types of configuration, so I installed ISA 2006 Enterprise on a desktop computer. Since this computer has only a single network card and I want to start out easy, I chose the "Single Network Adapter" template. We have an internal NAT'ed network which is in the 10 range. I have defined the internal network on the ISA server to be 10.XXX.YY.1 - 10.XXX.YY.255. I also have the Default rule, which denies all traffic, but I've added the following rule (Policy - Protocols - From - To):

Accept - HTTP - Internal - External
HTTPS - Local Host - Internal
HTTPS Server - Localhost

Then I configured Internet Explorer on a virtual machine running XP within VirtualBox with bridged networking (it gets the same network address range as regular computers on our network) to use the proxy; instead of the server name I used the IP address. When I try to access a web page, the request doesn't go through, and I get the following log messages on the proxy server (columns: Original Client IP, Client Agent, Authenticated Client, Service, Referring Server, Destination Host Name, Transport, HTTP Method, MIME Type, Object Source, Source Proxy, Destination Proxy, Bidirectional, Client Host Name, Filter Information, Network Interface, Raw IP Header, Raw Payload, GMT Log Time, Source Port, Processing Time, Bytes Sent, Bytes Received, Cache Information, Error Information, Authentication Server, Log Time, Client IP, Destination IP, Destination Port, Protocol, Action, Rule, Result Code, HTTP Status Code, Client Username, Source Network, Destination Network, URL, Server Name, Log Record Type):

10.XXX.YY.174 - TCP - - - 24.08.2010 13:25:24 1080 0 0 0 0x0 0x0 - 24.08.2010 06:25:24 10.XXX.YY.174 10.XXX.YY.175 80 HTTP Initiated Connection MyHTTPAccess 0x0 ERROR_SUCCESS Internal Local Host - PROXYTEST Firewall
10.XXX.YY.175 - TCP - - - 24.08.2010 13:25:24 2275 0 0 0 0x0 0x0 - 24.08.2010 06:25:24 10.XXX.YY.175 10.XXX.YY.175 80 HTTP Initiated Connection 0x0 ERROR_SUCCESS Local Host Local Host - PROXYTEST Firewall
10.XXX.YY.175 - TCP - - - 24.08.2010 13:25:25 2275 0 0 0 0x0 0x0 - 24.08.2010 06:25:25 10.XXX.YY.175 10.XXX.YY.175 80 HTTP Closed Connection 0x80074e20 FWX_E_GRACEFUL_SHUTDOWN Local Host Local Host - PROXYTEST Firewall
10.XXX.YY.175 - TCP - - - 24.08.2010 13:25:25 2276 0 0 0 0x0 0x0 - 24.08.2010 06:25:25 10.XXX.YY.175 10.XXX.YY.175 80 HTTP Initiated Connection 0x0 ERROR_SUCCESS Local Host Local Host - PROXYTEST Firewall
10.XXX.YY.175 - TCP - - - 24.08.2010 13:25:26 2276 0 0 0 0x0 0x0 - 24.08.2010 06:25:26 10.XXX.YY.175 10.XXX.YY.175 80 HTTP Closed Connection 0x80074e20 FWX_E_GRACEFUL_SHUTDOWN Local Host Local Host - PROXYTEST Firewall
10.XXX.YY.175 - TCP - - - 24.08.2010 13:25:26 2277 0 0 0 0x0 0x0 - 24.08.2010 06:25:26 10.XXX.YY.175 10.XXX.YY.175 80 HTTP Initiated Connection 0x0 ERROR_SUCCESS Local Host Local Host - PROXYTEST Firewall
10.XXX.YY.159 - UDP - - - 24.08.2010 13:25:26 68 0 0 0 0x0 0x0 - 24.08.2010 06:25:26 10.XXX.YY.159 255.255.255.255 67 DHCP (request) Denied Connection [Enterprise] Default rule 0xc004000d FWX_E_POLICY_RULES_DENIED Internal Local Host - PROXYTEST Firewall
10.XXX.YY.166 - UDP - - - 24.08.2010 13:25:26 68 0 0 0 0x0 0x0 - 24.08.2010 06:25:26 10.XXX.YY.166 255.255.255.255 67 DHCP (request) Denied Connection [Enterprise] Default rule 0xc004000d FWX_E_POLICY_RULES_DENIED Internal Local Host - PROXYTEST Firewall
0.0.0.0 Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729) Yes Proxy 10.XXX.YY.175 TCP GET Internet - - - Req ID: 096c76ae; Compression: client=No, server=No, compress rate=0% decompress rate=0% - - - 24.08.2010 13:25:27 0 2945 2581 446 0x0 0x40 24.08.2010 06:25:27 10.XXX.YY.174 10.XXX.YY.175 80 http Failed Connection Attempt MyHTTPAccess 10061 anonymous Internal Local Host http://www.vg.no/ PROXYTEST Web Proxy Filter
10.XXX.YY.175 - TCP - - - 24.08.2010 13:25:27 2277 0 0 0 0x0 0x0 - 24.08.2010 06:25:27 10.XXX.YY.175 10.XXX.YY.175 80 HTTP Closed Connection 0x80074e20 FWX_E_GRACEFUL_SHUTDOWN Local Host Local Host - PROXYTEST Firewall

    Read the article

  • XBRL - Moving from Production to Consumption

    - by jmorourke
    Here's an update on what's new with XBRL and how it can actually benefit your organization rather than adding extra time and cost to financial reporting. On February 29th (leap day) of 2012 I attended the XBRL and Financial Analysis Technology Conference at Baruch College in NYC. The event, which attracted over 300 XBRL gurus and fans, was presented by XBRL US, The New York Society of Security Analysts' Improved Corporate Reporting Committee, and Baruch College's Robert Zicklin Center for Corporate Integrity. The event featured keynotes from the U.S. Securities and Exchange Commission (SEC) and the CFA Institute, as well as panels covering alternative research tools and data, corporate reporting to stakeholders, and a demonstration of XBRL analysis tools. The program culminated in a presentation of the finalists and the winner of the $20,000 XBRL Challenge.

Some of the key points made in the sessions included:

The focus of XBRL tools is moving from production to consumption.
As of February 2012, over 9,000 companies are reporting in XBRL, with over 10 million facts filed to date.
XBRL taxonomy extensions have dropped from 27% to 11%, making comparisons easier.
The SEC reports that XBRL makes it easier to analyze disclosures and focus on accounting issues.
XBRL is helping standards-setters like the FASB speed their analysis of the impact of proposed accounting rule changes.
Companies like Thomson Reuters report that XBRL is helping speed the delivery of data to clients.

The most interesting part of the program, though, was the session highlighting the 5 finalists in the XBRL Challenge competition and the winning solution. The XBRL Challenge was launched in 2011 as a means of spurring the development of more end-user tools to help with the consumption of XBRL-based financial information. Over an 8-month process handled by 5 judges, there were 84 registrants, 15 completed submissions, 5 finalists and one winner of the challenge. All of the solutions are open-sourced tools, and most of them focus on consuming XBRL-based data. The 5 finalists included:

Advanced XBRL Processing from Oxide Solutions: an XBRL viewer for taxonomies, filings and company data with peer comparison capabilities.
Arelle: an API for XBRL processes; supports SEC validations, RSS feeds to access filings, etc.
Calcbench: an XBRL data analysis tool that can be embedded in other web applications. This tool can combine XBRL filings with real-time market data.
XBRL to XL: allows the importing of XBRL data into Microsoft Excel for analysis and comparisons. Users start on the web and populate Excel with XBRL data.
XBurble: allows users to search and view XBRL filings, export to Excel, and merge for comparison, and includes a workflow interface.

The winner of the $20,000 XBRL Challenge prize was Calcbench. More information about the XBRL Challenge and the finalists can be found at www.XBRLUS.org/challenge.

XBRL for Sustainability Reporting: other recent news on the XBRL front was the announcement by the Global Reporting Initiative (GRI) of an XBRL taxonomy for Sustainability Reporting. This taxonomy was co-developed by the GRI and Deloitte and is designed to make the consumption of data found in Sustainability Reports much easier.
Although there is no government mandate to file Sustainability Reports in XBRL format, organizations that use the GRI guidelines for Sustainability Reporting are encouraged to tag and submit their data voluntarily to the GRI, which will populate a database with Sustainability Reporting data and make it available to the public. For more information about this initiative, you can go to the GRI web site: www.globalreporting.org.

So how does all of this benefit corporate filers and investors? Since its introduction, the consensus in the market has been that XBRL mainly benefits the regulators and investment analysts who need to consume and analyze large volumes of financial data. But the emergence of more end-user tools for consuming and analyzing XBRL-based data, and the ability to perform quick comparisons of one company versus its peers and competitors in an industry group, will soon accelerate the benefits to corporate finance staff as well as individual investors. This could apply to financial results tagged in XBRL as well as non-financial information such as Sustainability Reporting, which over the long term will likely be integrated with financial reporting. And as multiple regulators and agencies in a country adopt the XBRL standard for corporate filings, more benefits will accrue as companies are able to leverage one set of XBRL-based financial data for multiple regulatory filings.

For more information about the latest developments in XBRL, check out the XBRL US or XBRL International web sites: www.xbrl.org, www.xbrlus.org. For more information about what Oracle is doing to support XBRL, here are some links:

http://www.oracle.com/us/solutions/ent-performance-bi/disclosure-management-065892.html
http://www.oracle.com/technetwork/database/features/xmldb/index-087631.html

Feel free to contact me if you have any questions or need more information: [email protected]

    Read the article

  • How I Work: Staying Productive Whilst Traveling

    - by BuckWoody
    I travel a lot. Not like some folks that are gone every week, mind you, although in the last month I've been to: Cambridge, UK; Anchorage, AK; San Jose, CA; Copenhagen, DK; Boston, MA; and I'm currently en route to Anaheim, CA. While this many places in a month is a bit unusual for me, I would say I travel frequently. I've traveled for most of my 28+ years in IT, and at one time was a consultant traveling weekly.

With that much time away from my primary work location, I have to find ways to stay productive. Some might say "just rest - take a nap!" - but I'm not able to do that. For one thing, I'm a very light sleeper and I've never slept on a plane - even a 30+ hour trip to New Zealand in business class - so that just isn't an option. I'm also not always on the plane, of course. There's the hotel, the taxi/bus/train, the airport, and then all of that over again when I arrive. Since my regular jobs have many demands, I have to get work done.

Note: No, I'm not always focused on work. I need downtime just like everyone else. Sometimes I just think, watch a movie or listen to tunes - and I give myself permission to do that anytime - sometimes the whole trip. I have too few heartbeats left in life to only focus on work - it's just not that important, and neither am I. Some of these tasks are letters to friends and family, or other personal things. What I'm talking about here is a plan, not some task list I have to follow. When I get to the location I'm traveling to, I always build in as much time as I can to ensure I enjoy the sights and the people I'm with. I would find traveling to be a waste if not for that.

The Unrealistic Expectation

As I would evaluate the trip I was taking - say a 6-8 hour flight - I would expect to get 10-12 hours of work done. After all, there's the time at the airport, the taxi and so on, and then of course the time in the air with all of the room, power, internet and everything else I needed to get my work done. I would pile up tasks at home, pack my bags, and head happily to the magical land of the TSA.

Right. On return from the trip, I had accomplished little, had more e-mails and other work piled up, and I was tired, hungry, and unorganized. This had to change. So I decided to do three things: segment my work, set realistic expectations, and plan accordingly.

Segmenting By Available Resources

The first task was to decide what kind of work I could do in each location - if any. I found that I was dependent on a few things to get work done, such as power, the Internet, and a place to sit down. Before I fly, I take some time at home to get all of the work I'd like to accomplish while away segmented into these areas, and print that out on paper, which goes in my suit-coat pocket along with a mechanical pencil. I print my tickets, and I'm all set for the adventure ahead. Then I simply do each kind of work whenever I'm in that situation.

No power

There are certain times when I don't have power available. But not only that, I might not even be able to use most of my electronics. So I now schedule as many phone calls as I can for the taxi/bus/train rides and the airports. I have a paper notebook (Moleskine, of course) and a pencil, and I print out any notes or numbers I need prior to the trip. Once I'm airborne or at the airport, I work on my laptop. I check and respond to e-mails, create slides, write code, do architecture, whatever I can. If I can't use any electronics, or once the power runs out, I schedule time for reading.
I can read at the airport or anywhere, actually, even in flight or on any other transport. I "read with a pencil", meaning I take a lot of notes, which I like to put in OneNote, but since in most cases I don't have power, I use the Moleskine for that. Speaking of which, sometimes as I'm thinking I come up with new topics, ideas, blog posts, or things to teach in my classes. Once again I take out the notebook and write it down. All of these notes get a check-mark when I get back to the office and transfer the writing to OneNote. I've tried those "smart pens" and so on to automate this, but it just never works out. Pencil and paper are just fine. As I mentioned, sometimes I just need to think. I'll do nothing, and let my mind wander, thinking of nothing in particular, or about some math problem or science question I'm interested in. My only issue with this is that I communicate to think, and I don't want to drive people crazy by being that guy who won't shut up, so I think in a different way.

Power, but no Internet or Phone

If I have power but no Internet or phone, I focus on the laptop and the tablet as before, and I also recharge my other gadgets.

Power, Internet, Phone and a Place to Work

At first I thought that when I arrived at the hotel or event I could get the same amount of work done that I do at the office. Not so. There are simply too many distractions, things you need, and other issues that get in the way. Of course, I can work on any device, read, think, write or whatever, but I am simply not as productive as I am in my home office. So I plan for about 25-50% as much work getting done in this environment as I think I could really do. I've done some measurements, and this holds true almost every time. The key is that I reset my expectations (and my co-workers' expectations as well) that this is the case. I use Out-Of-Office notices to let people know that I'm just not going to be 100% available at this time - it's hard for everyone, but it's more honest and realistic, and I'd rather they know that - and that I realize that - than let them think I'm totally available. Because I'm not - I'm traveling. I don't tend to put in too much detail, because after all I don't necessarily want to let people know when I'm not home :) but I do think it's important to let the people who depend on me know that I'll get back to them later.

I hope this helps you think through your own methodology of staying productive when you travel. Or perhaps you just go offline, and don't worry about any of this - good for you! That's completely valid as well.

(Oh, and yes, I wrote this at 35K feet, on Alaska Airlines on a trip. :) Practice what you preach, Buck.)

    Read the article

  • Training a 'replacement', how to enforce standards?

    - by Mohgeroth
    Not sure that this is the right Stack Exchange site to ask this on, but here goes...

Scope

I work for a small company that employs a few hundred people. The development team for the company is small and works out of Visual FoxPro. A specific department in the company hired me as a 'lone gunman' to fix and enhance a pre-existing invoicing system. I've successfully taken an Access application that suffered from a lot of risks and limitations and converted it into a C# application driven off of a SQL Server backend. I recently obtained my undergraduate degree and am no expert by any means. To help make up for that, I've felt that earning Microsoft certifications will force me to understand more about .NET and how it functions. So, after I gave my notice 9 months in advance, a replacement finally showed up 3 months ago. Their role is to learn what I have been designing in an attempt to support the applications written in C#.

The Replacement

Fresh out of college with no real-world work experience, the replacement's first instinct for anything involving data was, and still is, listboxes... any time data is mentioned, the listbox is the control of choice. This has gotten to the point, no matter how many times I discuss other controls, where I've seen 5 listboxes on a single form. Classroom experience was almost all C++ console development. So, an example of where I have concern is in a WinForms application: users need to key Reasons into a table to select from later. Given that a strongly typed dataset exists, I can just drag the data source from the toolbox and it would create all of this for me. I realize this is a simple example, but using databinding is the key. For the past few months now we have been talking about the strongly typed dataset, how to use it and where it interacts with other controls; datasets, how they work in relation to binding sources, adapters and data grid views. After handing this project off, I expected questions about how to implement these, since for me this is the way to do it.

What happened next simply floors me: an instance of an adapter from the strongly typed dataset was created in the activate event of the form, and a table was created and filled with data. Then a loop was written to manually add rows to a listbox from this table. Finally, a variable was kept to do lookups to figure out what ID the record was, for updates if required. How do they modify records, you ask? That was my first question too. You won't believe how simple it is: all you do is double-click, and they type the new value into a pop-up prompt. As a data entry operator, all the modal popups would drive me absolutely insane. The final solution exceeds 100 lines of code that must be maintained (a quick sketch of the contrast follows below).

So my concern is that none of this is sinking in... the department is only allowed 20 hours a week of the replacement's time. Up until last week, we've only been given 4-5 hours a week if I'm lucky. The past week or so, I've been lucky to get 10.

Question

WHAT DO I DO?! I have 4 weeks left until I leave and they fully 'support' this application. I love this job and the opportunity it has given me, but it's time for me to spread my wings and find something new. I am in no way, shape or form convinced that they are ready to take over. I do feel that the replacement has the technical ability to 'figure it out', but instead of learning they just write code to do all of this stuff manually.
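To make the contrast concrete, here is roughly the difference I keep explaining. The names below (InvoicingDataSet, ReasonsTableAdapter, the binding source, grid, listbox and lookup list) are made up for illustration; the designer generates the real ones from the typed dataset:

    // The databinding approach: the typed dataset and a BindingSource
    // do the work, and edits flow back through the adapter.
    private void LoadReasons()
    {
        var dataSet = new InvoicingDataSet();
        var adapter = new ReasonsTableAdapter();
        adapter.Fill(dataSet.Reasons);

        reasonsBindingSource.DataSource = dataSet.Reasons;
        reasonsGrid.DataSource = reasonsBindingSource;
    }

    // The approach I keep finding instead: manual population plus a
    // parallel ID lookup kept off to the side.
    private void LoadReasonsManually(System.Data.DataTable reasonsTable)
    {
        foreach (System.Data.DataRow row in reasonsTable.Rows)
        {
            reasonsListBox.Items.Add(row["Description"]);
            reasonIds.Add((int)row["ReasonID"]); // List<int> used for later lookups
        }
    }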
If the replacement wants to code differently in the end, as long as it works I'm fine with that, as horrifying as it looks. However, to support what I have designed they MUST understand how it works and how I have used the controls and the framework to make 'magic' happen. This project has about 40 forms, a database with 30-odd tables, triggers and stored procedures. It relates labor to invoices to contracts to projections... it's not as simple as it was three years ago when I began this project, and the department is now in a position where they cannot survive without it.

How in the world can I accomplish any of the following?

Enforce standards or an understanding of consistent design when the department manager keeps telling them they can do it however they want to.
Find a way to engage the replacement in actively learning the framework and system design that they must support.
Gracefully inform senior management that 5-9 hours a week is simply not enough time to learn about the department, the pre-existing processes, the applications that need to be supported AND where potential enhancements to the system go...

Yes, I know this is a wall of text; thanks for reading through it, but I simply don't know what I should be doing. For me, this job is a monster of a reference, and things would look extremely bad if I left and things fell apart. How do I handle this?

    Read the article

  • CPU/JVM/JBoss 7 slows down over time

    - by lukas
    I'm experiencing a performance slowdown on JBoss 7.1.1 Final. I wrote a simple program that demonstrates this behavior: I generate an array of 100,000 random integers and run bubble sort on it.

@Model
public class PerformanceTest {

    public void proceed() {
        long now = System.currentTimeMillis();
        int[] arr = new int[100000];
        for (int i = 0; i < arr.length; i++) {
            arr[i] = (int) (Math.random() * 200000);
        }
        long now2 = System.currentTimeMillis();
        System.out.println((now2 - now) + "ms took to generate array");

        now = System.currentTimeMillis();
        bubbleSort(arr);
        now2 = System.currentTimeMillis();
        System.out.println((now2 - now) + "ms took to bubblesort array");
    }

    public void bubbleSort(int[] arr) {
        boolean swapped = true;
        int j = 0;
        int tmp;
        while (swapped) {
            swapped = false;
            j++;
            for (int i = 0; i < arr.length - j; i++) {
                if (arr[i] > arr[i + 1]) {
                    tmp = arr[i];
                    arr[i] = arr[i + 1];
                    arr[i + 1] = tmp;
                    swapped = true;
                }
            }
        }
    }
}

Just after I start the server, it takes approximately 22 seconds to run this code. After a few days of JBoss 7.1.1 running, it takes 330 seconds to run the same code. In both cases, I launch the code when the CPU utilization is very low (say, 1%). Any ideas why? I run the server with the following arguments:

-Xms1280m -Xmx2048m -XX:MaxPermSize=2048m -Djava.net.preferIPv4Stack=true -Dorg.jboss.resolver.warning=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -Duser.timezone=UTC -Djboss.server.default.config=standalone-full.xml -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n

I'm running it on Linux 2.6.32-279.11.1.el6.x86_64 with java version "1.7.0_07". It's within a J2EE application. I use CDI, so I have a button on a JSF page that calls the method proceed() on the @RequestScoped component PerformanceTest. I deploy this as a separate war file, and even if I undeploy the other applications, it doesn't change the performance. It's a virtual machine that is sharing CPUs with another machine, but that one doesn't consume anything.

Here's yet another observation: when the server is freshly started and I run the bubble sort, it utilizes 100% of one processor core. It never switches to another core or drops utilization below 95%. However, after the server has been running for some time and I'm experiencing the performance problems, the method above usually still utilizes a CPU core at 100%, but I just found out from htop that the task is being switched very often to other cores. That is, at the beginning it's running on core #1, after say 2 seconds it's running on #5, then after say 2 seconds on #8, etc. Furthermore, the utilization is not kept at 100% on the core but sometimes drops to 80% or even lower. On a freshly started server, even if I simulate a load, it never switches the task to another core.

    Read the article

  • #OOW 2012 @PARIS...talking Oracle and Clouds, and Optimized Datacenter

    - by Eric Bezille
    For those of you who want to get the most out of Oracle technologies to evolve your IT to the next wave, I encourage you to register for the upcoming Oracle Optimized Datacenter event that will take place in Paris on November 28th. You will get the opportunity to exchange with Oracle experts and customers who have successfully evolved their IT by leveraging Oracle technologies. You will also get the latest news on some of the Oracle systems announcements made during OOW 2012. During this event we will give an update on Oracle and clouds, from private to public and hybrid models. So in preparing this session, I thought it was a good start to take stock of cloud computing in France, and of CIO requirements in particular. Starting in 2009 with the first Cloud Camp in Paris, the market has evolved, but the basics are still the same: think hybrid.

From Traditional IT to Clouds

One size doesn't fit all, and for big companies that already have an IT estate in place, there will be parts eligible for external (public) cloud and parts that will be required to stay inside the firewall, so the ability to integrate both sides is key. Nonetheless, one of the major impacts of the cloud computing trend on IT, reported by Forrester, is the pressure it puts on CIOs to evolve towards the same model that end users are now used to in their day-to-day lives, where self-service and flexibility are paramount. This is what is driving IT to transform itself into "a global service provider", or for some "IT 'is' the business" (see: Gartner Identifies Four Futures for IT and CIO), and in both models toward a private cloud service provider.

In this journey, there is still a big difference between most existing external clouds and a firm's IT: the number of applications that a CIO has to manage. Most cloud providers today are overly specialized, but at the end of the day there are really few business processes that rely on only one application. So CIOs have to combine everything together, external and internal. And for the internal parts that they will have to evolve into a private cloud, the scope can be very large. This will often require CIOs to move from their traditional approach to more disruptive ones; the time has come to introduce new standards and processes if they want to succeed. So let's have a look at the different cloud models: what types of users they address, what value they bring and, most importantly, what needs to be done by the cloud provider and what is left to the user.

IaaS, PaaS, SaaS: what's provided and what needs to be done

First of all, the cloud provider has to provide all the infrastructure needed to deliver the service. And the more value IT wants to provide, the more IT has to deliver and integrate: from disks to applications. Providing pure IaaS leaves a lot for the end user to cover, which is why the end user targeted by this cloud service is IT people. If you want to bring more value to developers, you need to provide them with a development platform that is ready to use, which is what PaaS stands for: providing not only processing power, storage and an OS, but also the database and middleware platform. SaaS, being the last mile of the cloud, provides an application ready to use by business users; the remaining part for the end users is configuring and specifying the application for their specific usage.
In addition to that, there are common challenges encompassing all types of cloud services:

Security: covering all aspects, not only user management but also data flows and data privacy.
Charge back: measuring what is used and by whom.
Application management: providing capabilities not only to deploy, but also to upgrade, from the OS for IaaS, to the database and middleware for PaaS, to a full business application for SaaS.
Scalability: the ability to evolve ALL the components of the cloud provider stack as needed.
Availability: the ability to cover "always on" requirements.
Efficiency: providing an infrastructure that leverages shared resources efficiently and still complies with SLAs (performance, availability, scalability, and ability to evolve).
Automation: providing the orchestration of ALL the components across the whole service lifecycle (deployment, growth and shrink (elasticity), upgrades, ...).
Management: providing monitoring, configuration and self-service up to the end users.

Oracle Strategy and Clouds

For CIOs to succeed in their private cloud implementation means covering all those aspects for the lifecycle of each component they have selected to build their cloud. That's where a multi-vendor layered approach comes up short in terms of efficiency, and that's the reason why Oracle focuses on taking care of all those aspects directly at the engineering level, to truly provide efficient cloud services solutions for IaaS, PaaS and SaaS. We go as far as embedding software functions in hardware (at the storage and processor level, ...) to ensure the best SLAs with the highest efficiency. The beauty of it, as we rely on standards, is that the Oracle components you are running today in-house are exactly the same ones we use to build clouds, bringing you flexibility, reversibility and a fast path to adoption.

With Oracle Engineered Systems (Exadata, Exalogic and SPARC SuperCluster, more specifically, when talking about cloud), we deliver all those components, hardware and software, already engineered together at the Oracle factory, with a single pane of glass for the management of ALL the components through Oracle Enterprise Manager, and with high availability, scalability and the ability to evolve by design. To give you a feeling of what that brings just in terms of implementation project timeline, with Oracle SPARC SuperCluster, for example, we have a consistent track record of having the system plugged into an existing datacenter and ready in a week. This includes Oracle Database, OS, virtualization, database storage (Exadata storage cells in this case), application storage, and all network configuration. This strategy enables CIOs to build cloud services very quickly, taking out not only the complexity of integrating everything together but also the automation and evolution complexity and cost.

I invite you to discuss all those aspects with regard to your particular context, face to face, on November 28th.

    Read the article

  • Mark Hurd on the Customer Revolution: Oracle's Top 10 Insights

    - by Richard Lefebvre
    Reprint of an article from Forbes.

Businesses that fail to focus on customer experience will hear a giant sucking sound from their vanishing profitability. Because in today's dynamic global marketplace, consumers now hold the power in the buyer-seller equation, and sellers need to revamp their strategy for this new world order. The ability to relentlessly deliver connected, personalized and rewarding customer experiences is rapidly becoming one of the primary sources of competitive advantage in today's dynamic global marketplace. And the inability or unwillingness to realize that the customer is a company's most important asset will lead, inevitably, to decline and failure.

Welcome to the lifecycle of customer experience, in which consumers explore, engage, shop, buy, ask, compare, complain, socialize, exchange, and more across multiple channels with the unconditional expectation that each of those interactions will be completed in an efficient and personalized manner however, wherever, and whenever the customer wants. While many niche companies are offering point solutions within that sprawling and complex spectrum of needs and requirements, businesses looking to deliver superb customer experiences are still left having to do multiple product evaluations, multiple contract negotiations, multiple test projects, multiple deployments, and, perhaps most annoying of all, multiple and never-ending integration projects to string together all those niche products from all those niche vendors.

With its new suite of customer-experience solutions, Oracle believes it can help companies unravel these challenges and move at the speed of their customers, anticipating their needs and desires and creating enduring and profitable relationships. Those solutions span the full range of marketing, selling, commerce, service, listening/insights, and social and collaboration tools for employees. When Oracle launched its suite of Customer Experience solutions at a recent event in New York City, president Mark Hurd analyzed the customer experience revolution taking place and presented Oracle's strategy for empowering companies to capitalize on this important market shift. From Hurd's presentation and related materials, I've extracted this list of Hurd's Top 10 Insights into the Customer Revolution.

1. Please Don't Feed the Competitor's Pipeline! After enduring a poor experience, 89% of consumers say they would immediately take their business to your competitor. (Except where noted, the source for these findings is the 2011 Customer Experience Impact (CEI) Report, including a survey commissioned by RightNow (acquired by Oracle in March 2012) and conducted by Harris Interactive.)

2. The Addressable Market Is Massive. Only 1% of consumers say their expectations were always met by their actual experiences.

3. They're Willing to Pay More! In return for a great experience, 86% of consumers say they'll pay up to 25% more.

4. The Social Media Microphone Is Always Live. After suffering through a poor experience, more than 25% of consumers said they posted a negative comment on Twitter or Facebook or other social media sites. Conversely, of those consumers who got a response after complaining, 22% posted positive comments about the company.

5. The New Deal Is Never Done: Embrace the Entire Customer Lifecycle. An appropriately active and engaged relationship, says Hurd, extends across every step of the entire process: need, research, select, purchase, receive, use, maintain, and recommend.

6. The 360-Degree Commitment. Customers want to do business with companies that actively and openly demonstrate the desire to establish strong and seamless connections across employees, the company, and the customer, says research firm Temkin Group in its report called "The CX Competencies."

7. Understand the Emotional Drivers Behind Brand Love. What makes consumers fall in love with a brand? Among the top factors are friendly employees and customer reps (73%), easy access to information and support (55%), and personalized experiences, such as when companies know precisely what products or services customers have purchased in the past and what issues those customers have raised (36%).

8. The Importance of Immediate Action. You've got one week to respond, and then the opportunity is lost. If your company needs more than a week to answer a prospect's question or request, most of those prospects will terminate the relationship.

9. Want More Revenue, Less Churn, and More Referrals? Then improve the overall customer experience: Forrester's research says that approach put an extra $900 million in the pockets of wireless service providers, $800 million for hotels, and $400 million for airlines.

10. The Formula for CX Success. Hurd says it includes three elegantly interlaced factors: Connected Engagement, to personalize the experience; Actionable Insight, to maximize the engagement; and Optimized Execution, to deliver on the promise of value.

RECOMMENDED READING:
The Top 10 Strategic CIO Issues For 2013
Wal-Mart, Amazon, eBay: Who's the Speed King of Retail?
Career Suicide and the CIO: 4 Deadly New Threats
Memo to Marc Benioff: Social Is a Tool, Not an App

    Read the article

  • Myths about Coding Craftsmanship part 2

    - by tom
    Myth 3: The source of all bad code is inept developers and stupid people
    When you review code, is this what you assume? Shame on you. If you assume that much going in, you are probably making just as many assumptions in your code. Bad code can be the result of any number of causes, including but not limited to: using dated techniques (like boxing when generics are available), not following standards (“look how he does the spacing between arguments!” or “did he really just name that variable ‘bln_Hello_Cats’?”), being redundant, using properties, methods, or objects in a novel way (like switching on button.Text between “Hello World” and “Hello World “ //clever use of space character… sigh), not following the SOLID principles, and hacking around assumptions made in earlier iterations / hacking in features that should be worked into the overall design. The first two issues, while annoying, are pretty easy to spot and can be fixed easily. If your coding team is made up of experienced professionals who are passionate about staying current, these shouldn’t be happening. If you work with a variety of skills, backgrounds, and experience levels, there will be some of this stuff going on. If you have an opportunity to mentor such a developer who is receptive to constructive criticism, don’t be a jerk; help them, and the codebase will improve. A little patience can improve the codebase, your work environment, and even your perspective.
    The novelty and redundancy I have encountered has often been the use of creativity when language knowledge was perceived as unavailable or too time-consuming. When developers learn on the job you get a lot of this. Rather than going to MSDN, developers will use what they know. Depending on the constraints of their assignment, hacking together what they know may seem quite practical. This is not stupidity, though I often wonder how much time is actually “saved” by hacking. These issues are often harder to untangle, if we ever do. They can also grow out of control as we write hack after hack to make it work and get back to some development that is satisfying.
    Hacking upon an existing hack is what I call “feeding the monster”. Code monsters are anti-patterns and hacks gone wild. The reason code monsters continue to get bigger is that they keep growing in scope, touching more and more of the application. This is not the result of dumb developers. It is probably the result of avoiding design, of not taking the time to understand the problems, or of failing to anticipate or communicate the vision of the product. If our developers don’t understand the purpose of a feature or product, how do we expect potential customers to?
    Forethought and organization are often what is missing from bad code. Developers who do not use the SOLID principles should be encouraged to learn them and be given guidance on how to apply them. The time “saved” by giving hackers room to hack will be paid back, and then some: not as technical debt, but as shoddy work that, if not replaced, will be struggled with again and again. Bad code is not (usually) the result of dumb developers; it is the result of trying to do too much without the proper resources, and of replacing the right thing that needs doing with the first thoughtless thing that comes into our heads.
    Object oriented code is all about relationships between objects. Coders who believe their coworkers are all fools tend to write objects that are difficult to work with, not eager to explain themselves, and prone to performing erratically and irrationally. If you constantly find you are surrounded by idiots, you may want to ask yourself if you are being unreasonable, if you are being closed-minded, or if you have chosen the right profession. Opening your mind up to the idea that you probably work with rational, well-intentioned people will probably make you a better coder, and it might even make you less grumpy. If you are surrounded by jerks who do not engage in the exchange of ideas and do not care about their customers or the durability of the code you are building together, then I suggest you find a new place to work.
    Myth 4: Customers don’t care about “beautiful” code
    Craftsmanship is customer focused because it means that the job was done right and the product will withstand the abuse, modifications, and scrutiny of our customers. Users can appreciate a predictable timeline for a release, a product delivered on time and on budget, a feature set that does not interfere with the task(s) it is supporting, quick turnarounds on exception messages, self-healing issues, and fewer issues overall. All of these are hindered by skimping on craftsmanship, whether we are writing data access code or reusable components.
    What do you think? Does bad code come primarily from low-IQ individuals? Do customers care about beautiful code?
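    To make the “dated techniques” example above concrete, here is a minimal C# sketch (not from the original post; the class and variable names are illustrative only) contrasting boxing with an ArrayList against a generic List<T>:

    using System;
    using System.Collections;
    using System.Collections.Generic;

    class BoxingVersusGenerics
    {
        static void Main()
        {
            // Dated technique: ArrayList stores object, so every int added
            // is boxed on the heap and must be unboxed (with a cast) on read.
            ArrayList dated = new ArrayList();
            dated.Add(42);                // boxes the int
            int a = (int)dated[0];        // unboxes; a wrong cast fails only at runtime

            // Current technique: List<int> is strongly typed, so no boxing
            // occurs and type mistakes are caught at compile time.
            List<int> modern = new List<int>();
            modern.Add(42);
            int b = modern[0];

            Console.WriteLine(a + b);
        }
    }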

    Read the article

  • ASA 5505 stops local internet when connected to VPN

    - by g18c
    Hi, I have a Cisco ASA router running firmware 8.2(5) which hosts an internal LAN on 192.168.30.0/24. I have used the VPN Wizard to set up L2TP access and I can connect in fine from a Windows box and can ping hosts behind the VPN router. However, when connected to the VPN I can no longer ping out to my internet or browse web pages. I would like to be able to access the VPN, and also browse the internet at the same time - I understand this is called split tunneling (have ticked the setting in the wizard but to no effect) and if so how do I do this? Alternatively, if split tunneling is a pain to set up, then making the connected VPN client have internet access from the ASA WAN IP would be OK. Thanks, Chris

    names
    !
    interface Ethernet0/0
     switchport access vlan 2
    !
    interface Ethernet0/1
    !
    interface Vlan1
     nameif inside
     security-level 100
     ip address 192.168.30.1 255.255.255.0
    !
    interface Vlan2
     nameif outside
     security-level 0
     ip address 208.74.158.58 255.255.255.252
    !
    ftp mode passive
    access-list inside_nat0_outbound extended permit ip any 10.10.10.0 255.255.255.128
    access-list inside_nat0_outbound extended permit ip 192.168.30.0 255.255.255.0 192.168.30.192 255.255.255.192
    access-list DefaultRAGroup_splitTunnelAcl standard permit 192.168.30.0 255.255.255.0
    access-list DefaultRAGroup_splitTunnelAcl_1 standard permit 192.168.30.0 255.255.255.0
    pager lines 24
    logging asdm informational
    mtu inside 1500
    mtu outside 1500
    ip local pool LANVPNPOOL 192.168.30.220-192.168.30.249 mask 255.255.255.0
    icmp unreachable rate-limit 1 burst-size 1
    no asdm history enable
    arp timeout 14400
    global (outside) 1 interface
    nat (inside) 0 access-list inside_nat0_outbound
    nat (inside) 1 192.168.30.0 255.255.255.0
    route outside 0.0.0.0 0.0.0.0 208.74.158.57 1
    timeout xlate 3:00:00
    timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
    timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
    timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
    timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
    timeout tcp-proxy-reassembly 0:01:00
    timeout floating-conn 0:00:00
    dynamic-access-policy-record DfltAccessPolicy
    http server enable
    http 192.168.30.0 255.255.255.0 inside
    snmp-server enable traps snmp authentication linkup linkdown coldstart
    crypto ipsec transform-set ESP-AES-256-MD5 esp-aes-256 esp-md5-hmac
    crypto ipsec transform-set ESP-DES-SHA esp-des esp-sha-hmac
    crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
    crypto ipsec transform-set ESP-DES-MD5 esp-des esp-md5-hmac
    crypto ipsec transform-set ESP-AES-192-MD5 esp-aes-192 esp-md5-hmac
    crypto ipsec transform-set ESP-3DES-MD5 esp-3des esp-md5-hmac
    crypto ipsec transform-set ESP-AES-256-SHA esp-aes-256 esp-sha-hmac
    crypto ipsec transform-set ESP-AES-128-SHA esp-aes esp-sha-hmac
    crypto ipsec transform-set ESP-AES-192-SHA esp-aes-192 esp-sha-hmac
    crypto ipsec transform-set ESP-AES-128-MD5 esp-aes esp-md5-hmac
    crypto ipsec transform-set TRANS_ESP_3DES_SHA esp-3des esp-sha-hmac
    crypto ipsec transform-set TRANS_ESP_3DES_SHA mode transport
    crypto ipsec security-association lifetime seconds 28800
    crypto ipsec security-association lifetime kilobytes 4608000
    crypto dynamic-map SYSTEM_DEFAULT_CRYPTO_MAP 65535 set transform-set ESP-AES-128-SHA ESP-AES-128-MD5 ESP-AES-192-SHA ESP-AES-192-MD5 ESP-AES-256-SHA ESP-AES-256-MD5 ESP-3DES-SHA ESP-3DES-MD5 ESP-DES-SHA ESP-DES-MD5 TRANS_ESP_3DES_SHA
    crypto map outside_map 65535 ipsec-isakmp dynamic SYSTEM_DEFAULT_CRYPTO_MAP
    crypto map outside_map interface outside
    crypto isakmp enable outside
    crypto isakmp policy 10
     authentication pre-share
     encryption 3des
     hash sha
     group 2
     lifetime 86400
    telnet timeout 5
    ssh timeout 5
    console timeout 0
    dhcpd auto_config outside
    !
    threat-detection basic-threat
    threat-detection statistics access-list
    no threat-detection statistics tcp-intercept
    webvpn
    group-policy DefaultRAGroup internal
    group-policy DefaultRAGroup attributes
     dns-server value 192.168.30.3
     vpn-tunnel-protocol l2tp-ipsec
     split-tunnel-policy tunnelspecified
     split-tunnel-network-list value DefaultRAGroup_splitTunnelAcl_1
    username user password Cj7W5X7wERleAewO8ENYtg== nt-encrypted privilege 0
    tunnel-group DefaultRAGroup general-attributes
     address-pool LANVPNPOOL
     default-group-policy DefaultRAGroup
    tunnel-group DefaultRAGroup ipsec-attributes
     pre-shared-key *****
    tunnel-group DefaultRAGroup ppp-attributes
     no authentication chap
     authentication ms-chap-v2
    !
    class-map inspection_default
     match default-inspection-traffic
    !
    !
    policy-map type inspect dns preset_dns_map
     parameters
      message-length maximum client auto
      message-length maximum 512
    policy-map global_policy
     class inspection_default
      inspect dns preset_dns_map
      inspect ftp
      inspect h323 h225
      inspect h323 ras
      inspect rsh
      inspect rtsp
      inspect esmtp
      inspect sqlnet
      inspect skinny
      inspect sunrpc
      inspect xdmcp
      inspect sip
      inspect netbios
      inspect tftp
      inspect ip-options
    !
    service-policy global_policy global
    prompt hostname context
    : end
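    A hedged note, not part of the original question: the group-policy above already requests split tunneling, but the Windows built-in L2TP/IPsec client does not honour the ASA's split-tunnel attributes, so the usual fixes are a client-side routing change or hairpinning the clients' internet traffic back out of the ASA. A sketch of both options using the addressing from the question (the hairpin syntax is from memory for 8.2-era ASAs and worth verifying against Cisco's "VPN client internet access" examples):

    ! Option 1 (on the Windows client): untick "Use default gateway on
    ! remote network" in the VPN connection's IPv4 advanced settings,
    ! so only 192.168.30.0/24 is routed into the tunnel.

    ! Option 2 (hairpin on the ASA): allow traffic to re-enter and leave
    ! the outside interface, and NAT the VPN pool subnet to the WAN IP.
    same-security-traffic permit intra-interface
    nat (outside) 1 192.168.30.192 255.255.255.192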

    Read the article

  • A Visual Studio Release Grows in Brooklyn

    - by andrewbrust
    Yesterday, Microsoft held its flagship launch event for Office 2010 in Manhattan. Today, the Redmond software company is holding a local launch event for Visual Studio (VS) 2010, in Brooklyn. How come information workers get the 212 treatment and developers are relegated to 718?
    Well, here’s the thing: the Brooklyn Marriott is actually a great place for an event, but you need some intimate knowledge of New York City to know that. NBC’s Studio 8H, where the Office launch was held yesterday (and from where SNL is broadcast) is a pretty small venue, but you’d need some inside knowledge to recognize that. Likewise, while Office 2010’s value is readily apparent, appreciating VS 2010’s value takes a bit more savvy. Setting aside its year-based designation, this release of VS, counting the old Visual Basic releases, is the 10th version of the product. How can a developer audience get excited about an integrated development environment when it reaches double-digit version numbers? Well, it can be tough. Luckily, Microsoft sent Jay Schmelzer, a Group Program Manager from the Visual Studio team in Redmond, to come tell the Brooklyn audience why they should be excited.
    Turns out there are a lot of reasons. Support for SharePoint development is a big one. In previous versions of VS, that support has been anemic at best. The shortage of SharePoint developers is a huge issue in the industry, and this should help. There’s also built-in support for Windows Azure (Microsoft’s cloud platform) and, through a download, support for the forthcoming Windows Phone 7 platform. ASP.NET MVC, a “close-to-the-metal” Web development option that does away with the Web Forms abstraction layer, has a first-class presence in VS. So too does jQuery, the Open Source JavaScript library that makes client-side development a breeze. The jQuery support is so good that Microsoft now contributes to that Open Source project and offers IntelliSense support for it in the code editor.
    Speaking of the VS code editor, it now supports multi-monitor setups, zoom-in, and block selection. If you’re not a developer, this may sound confusing and minute. I’ll just say that for people who are developers, these are little things that really contribute to productivity, and that translates into lower development costs.
    The really cool demo, though, was around Visual Studio 2010’s new debugging features. This stuff is hard to showcase, but I believe it’s truly breakthrough technology: imagine being able to step backwards in time to see what might have caused a bug. Cool? Now imagine being able to do that even if you weren’t the tester and weren’t present while the testing was being done. Then imagine being able to see a video screen capture of what the tester was doing with your app when the bug occurred. VS 2010 allows all that. This could be the demise of the IWOMM (“it works on my machine”) syndrome.
    After the keynote, I asked Schmelzer if any of Microsoft’s competitors have debugging tools that come close to VS 2010’s. His answer was an earnest “we don’t think so.” If that’s true, that’s a big deal, and a huge advantage for developer teams who adopt it. It will make software development much cheaper and more efficient. Kind of like holding a launch event at the Brooklyn Marriott instead of 30 Rock in Manhattan!
    VS 2010 (version 10) and Office 2010 (version 14) aren’t the only new product versions Microsoft is releasing right now. There’s also SQL Server 2008 R2 (version 10.5), Exchange 2010 (version 14), SharePoint 2010 (also version 14) and, of course, Windows 7. With so many new versions at such levels of maturity, I think it’s fair to say Microsoft has reached middle-age. How does a company stave off a potential mid-life crisis, especially with young Turks like Google coming along and competing so fiercely? Hard to say. But if focusing on core value, including value that’s hard to play into a sexy demo, is part of the answer, then Microsoft’s doing OK. And if some new tricks, like Windows Phone 7, can gain some traction, that might round things out nicely.
    Are the legacy products old tricks, or are they revised classics? I honestly don’t know, because it’s the market’s prerogative to pass that judgement. I can say this though: based on today’s show, I think Microsoft’s been doing its homework.

    Read the article

  • Computer Networks UNISA - Chap 8 &ndash; Wireless Networking

    - by MarkPearl
    After reading this section you should be able to:
    - Explain how nodes exchange wireless signals
    - Identify potential obstacles to successful transmission and their repercussions, such as interference and reflection
    - Understand WLAN architecture
    - Specify the characteristics of popular WLAN transmission methods, including 802.11 a/b/g/n
    - Install and configure wireless access points and their clients
    - Describe wireless MAN and WAN technologies, including 802.16 and satellite communications
    The Wireless Spectrum
    All wireless signals are carried through the air by electromagnetic waves. The wireless spectrum is a continuum of the electromagnetic waves used for data and voice communication. The wireless spectrum falls between 9 kHz and 300 GHz.
    Characteristics of Wireless Transmission
    Antennas - Each type of wireless service requires an antenna specifically designed for that service. The service’s specification determines the antenna’s power output, frequency, and radiation pattern. A directional antenna issues wireless signals along a single direction. An omnidirectional antenna issues and receives wireless signals with equal strength and clarity in all directions. The geographical area that an antenna or wireless system can reach is known as its range.
    Signal Propagation - LOS (line of sight) uses the least amount of energy and results in the reception of the clearest possible signal. When there is an obstacle in the way, the signal may pass through the object or be absorbed by the object, or it may be subject to reflection, diffraction or scattering:
    - Reflection - waves encounter an object and bounce off it
    - Diffraction - a signal splits into secondary waves when it encounters an obstruction
    - Scattering - the diffusion, or reflection in multiple different directions, of a signal
    Signal Degradation - Fading occurs as a signal hits various objects. Because of fading, the strength of the signal that reaches the receiver is lower than the transmitted signal strength. The further a signal moves from its source, the weaker it gets (this is called attenuation). Signals are also affected by noise (electromagnetic interference). Interference can distort and weaken a wireless signal in the same way that noise distorts and weakens a wired signal.
    Frequency Ranges - Older wireless devices used the 2.4 GHz band to send and receive signals, which has 11 unlicensed communication channels. Newer wireless devices can also use the 5 GHz band, which has 24 unlicensed channels.
    Narrowband, Broadband, and Spread Spectrum Signals - Narrowband: a transmitter concentrates the signal energy at a single frequency or in a very small range of frequencies. Broadband: uses a relatively wide band of the wireless spectrum and offers higher throughputs than narrowband technologies. The use of multiple frequencies to transmit a signal is known as spread-spectrum technology; in other words, a signal never stays continuously within one frequency range during its transmission. One specific implementation of spread spectrum is FHSS (frequency hopping spread spectrum); another type is DSSS (direct sequence spread spectrum).
    Fixed vs. Mobile - Each type of wireless communication falls into one of two categories. Fixed: the locations of the transmitter and receiver do not move (this saves energy, because weaker signal strength is possible with directional antennas). Mobile: the location can change.
    WLAN (Wireless LAN) Architecture
    There are two main types of arrangements. Ad hoc: data is sent directly between devices - good for small, local networks. Infrastructure mode: a wireless access point is placed centrally, and all devices connect through it.
    802.11 WLANs
    The most popular wireless standards used on contemporary LANs are those developed by IEEE’s 802.11 committee. Over the years several distinct standards related to wireless networking have been released. Four of the best-known standards, also referred to as Wi-Fi, are 802.11b, 802.11a, 802.11g, and 802.11n. These four standards share many characteristics: all four use half-duplex signalling and follow the same access method.
    Access Method - The 802.11 standards specify the use of CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) to access a shared medium. Using CSMA/CA, before a station begins to send data on an 802.11 network it checks for existing wireless transmissions. If the source node detects no transmission activity on the network, it waits a brief period of time and then sends its transmission. If the source does detect activity, it waits a brief period of time before checking again. The destination node receives the transmission and, after verifying its accuracy, issues an acknowledgement (ACK) packet to the source. If the source receives the ACK, it assumes the transmission was successful; if it does not receive an ACK, it assumes the transmission failed and sends it again.
    Association - There are two types of scanning. Active: the station transmits a special frame, known as a probe, on all available channels within its frequency range; when an access point finds the probe frame, it issues a probe response. Passive: the wireless station listens on all channels within its frequency range for a special signal, known as a beacon frame, issued from an access point - the beacon frame contains the information necessary to connect to the access point. Re-association occurs when a mobile user moves out of one access point’s range and into the range of another.
    Frames - Read pages 378-381 about frames and specific 802.11 protocols.
    Bluetooth Networks
    Ericsson originally invented Bluetooth technology in the early 1990s. In 1998 other manufacturers joined Ericsson in the Special Interest Group (SIG), whose aim was to refine and standardize the technology. Bluetooth was designed to be used on small networks composed of personal communications devices, and it has become a popular wireless technology for communicating among cellular telephones, phone headsets, and the like.
    Wireless WANs and Internet Access
    Refer to pages 396-402 of the textbook for details.
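    The CSMA/CA flow described under “Access Method” maps naturally onto a small send loop. Below is a minimal C# sketch of the sender’s side (not from the textbook; IChannel and all timing values are hypothetical stand-ins for the real physical-layer machinery):

    using System;
    using System.Threading;

    // Hypothetical channel abstraction, for illustration only.
    interface IChannel
    {
        bool IsBusy();                        // carrier sense
        void Transmit(byte[] frame);
        bool WaitForAck(TimeSpan timeout);    // true if an ACK arrived in time
    }

    static class CsmaCaSender
    {
        static readonly Random Rng = new Random();

        public static void Send(IChannel channel, byte[] frame)
        {
            while (true)
            {
                // 1. Check for existing transmissions; if the medium is busy,
                //    wait briefly and check again.
                while (channel.IsBusy())
                    Thread.Sleep(Rng.Next(1, 10));

                // 2. No activity detected: wait a brief period, then transmit.
                Thread.Sleep(Rng.Next(1, 5));
                channel.Transmit(frame);

                // 3. The destination verifies the frame and answers with an ACK.
                if (channel.WaitForAck(TimeSpan.FromMilliseconds(50)))
                    return; // ACK received: assume success

                // 4. No ACK: assume the transmission failed and resend.
            }
        }
    }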

    Read the article

  • Combining Shared Secret and Certificates

    - by Michael Stephenson
    As discussed in the introduction article, this walkthrough will explain how you can implement WCF security with the Windows Azure Service Bus to ensure that you can protect your endpoint in the cloud with a shared secret, but also combine this with certificates so that you can identify the sender of the message.
    Prerequisites
    As in the previous article, before going into the walkthrough I want to explain a few assumptions about the scenario we are implementing; to keep the article shorter I am not going to walk through all of the steps of setting some of this up. In the solution we have a simple console application which will represent the client application. There is also the services WCF application which contains the WCF service we will expose via the Windows Azure Service Bus. The WCF Service application in this example was hosted in IIS 7 on Windows 2008 R2 with AppFabric Server installed and configured to auto-start the WCF listening services. I am not going to go through significant detail around the IIS setup because it should not matter in relation to this article; however, if you want to understand more about how to configure WCF and IIS for such a scenario, please refer to the following paper, which goes into a lot of detail: http://tinyurl.com/8s5nwrz
    Setting up the Certificates
    To keep the post and sample simple I am going to use the local computer store for all certificates, but this bit is really just the same as setting up certificates for an example where you are using WCF without the Windows Azure Service Bus. In the sample I have included two batch files which you can use to create the sample certificates or remove them. Basically you will end up with:
    - A certificate called PocServerCert in the personal store for the local computer, which will be used by the WCF service component
    - A certificate called PocClientCert in the personal store for the local computer, which will be used by the client application
    - A root certificate in the Root store called PocRootCA, with its associated revocation list, which is the root from which the client and server certificates were created
    For the sample I'm just using development certificates like you would normally, and you can see exactly how these are configured and placed in the stores from the batch files in the solution, which use makecert and certmgr.
    The Service Component
    To begin with let's look at the service component and how it can be configured to listen to the service bus using a shared secret but also accept a certificate token from the client. In the sample the service component is called Acme.Azure.ServiceBus.Poc.Cert.Services. It has a single service based on the Visual Studio template you get when you add a new WCF Service Application, so we have a service called Service1 with its Echo method. Nothing special so far! The next step is to look at the web.config file to see how we have configured the WCF service. In the services section of the WCF configuration I have created my service with a local endpoint, which I simply used for a little bit of diagnostics and to check it was working, and, more importantly, the Windows Azure endpoint, which uses the ws2007HttpRelayBinding (note that this should also work just the same if you're using netTcpRelayBinding). The key points to note in that configuration (shown as a screenshot in the original article) are the service behaviour called MyServiceBehaviour and the service bus endpoint behaviour called MyEndpointBehaviour.
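    Since the configuration screenshots did not survive this reprint, here is a hedged sketch of roughly what that services section might look like; the namespace URL and the contract name IService1 are assumptions, and the attribute names should be checked against the Service Bus SDK version you are using:

    <system.serviceModel>
      <services>
        <service name="Acme.Azure.ServiceBus.Poc.Cert.Services.Service1"
                 behaviorConfiguration="MyServiceBehaviour">
          <!-- Relay endpoint: transport security to the Service Bus,
               certificate credential carried in the message -->
          <endpoint address="https://yournamespace.servicebus.windows.net/Service1"
                    binding="ws2007HttpRelayBinding"
                    bindingConfiguration="MyRelayBinding"
                    behaviorConfiguration="MyEndpointBehaviour"
                    contract="Acme.Azure.ServiceBus.Poc.Cert.Services.IService1" />
        </service>
      </services>
      <bindings>
        <ws2007HttpRelayBinding>
          <binding name="MyRelayBinding">
            <security mode="TransportWithMessageCredential"
                      relayClientAuthenticationType="RelayAccessToken">
              <message clientCredentialType="Certificate" />
            </security>
          </binding>
        </ws2007HttpRelayBinding>
      </bindings>
    </system.serviceModel>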
    We will go into these in more detail later.
    The Relay Binding
    The relay binding for the service has been configured to use the TransportWithMessageCredential security mode. This is the important bit: the transport security relates to the interaction between the service and the Azure Service Bus it is listening to, and the message credential is where we use our certificate, as specified in the message/clientCredentialType attribute. Note also that we have left the relayClientAuthenticationType set to RelayAccessToken. This means that authentication will be made against ACS for accessing the service bus, and messages will not be accepted from any sender who has not been authenticated by ACS.
    The Endpoint Behaviour
    The endpoint behaviour is configured to use the shared secret client credential for accessing the service bus; for diagnostic purposes I have also included the service registry element. Hopefully, if you are familiar with the Windows Azure Service Bus relay feature, this is a very common setup for this section. There is nothing specific to the certificate implementation here.
    The Service Behaviour
    Now we come to the bit with most of the certificate stuff in it. In the service behaviour I have included the serviceCredentials element, set up the clientCertificate check, and specified the serviceCertificate with information on how to find the server's certificate in the store. I have also added a serviceAuthorization section where I will plug in my own authorization component to perform additional security checks after the service has validated that the message was signed with a good certificate. I also have the same serviceSecurityAudit configuration to log access to my service.
    My Authorization Manager
    WCF will eventually hand off the message to my authorization component before it calls the service code. This is where I can perform some logic to check if the identity is allowed to access resources. In this case I am simply rejecting messages from anyone except the PocClientCert certificate.
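    The original post showed the authorization manager as a screenshot. A hedged C# sketch of what such a component might look like (the subject-name check is an assumption; the original may have matched the certificate differently):

    using System;
    using System.ServiceModel;

    // Plugged in via the serviceAuthorization section of the service behaviour.
    public class PocAuthorizationManager : ServiceAuthorizationManager
    {
        protected override bool CheckAccessCore(OperationContext operationContext)
        {
            // The message credential has already been validated by WCF;
            // here we only allow the identity established by PocClientCert.
            var identity = operationContext.ServiceSecurityContext.PrimaryIdentity;
            return identity != null
                && identity.Name.Contains("PocClientCert");
        }
    }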
    The Client
    Now let's take a look at the client side of this solution and how we can configure the client to authenticate against ACS but also send a certificate over to the service component so it can implement additional security checks on-premise. I have a console application, and in the program class I want to use the proxy generated with Add Service Reference to send a message via the Azure Service Bus. In my WCF client configuration I have set up my details for the Azure Service Bus URL and am using the ws2007HttpRelayBinding. Next is my configuration for the relay binding: security is configured to use TransportWithMessageCredential, so we will flow the token from a certificate with the message, and the relayClientAuthenticationType is RelayAccessToken, which means the component will validate against ACS before being allowed to access the relay endpoint to send a message. After the binding we need to configure the endpoint behaviour. This contains the normal transportClientEndpointBehavior to set up the ACS shared secret configuration, but we have also configured the clientCertificate to look for the PocClientCert. Finally we have the code of the client in the console application which will call the service bus. We create our proxy and then make a call to the WCF service in exactly the normal way, but the configuration will jump in and ensure that a token representing the client certificate is passed with the message.
    Conclusion
    As you can see from the above walkthrough, it is not too difficult to configure a service to use both a shared secret and a certificate-based token at the same time. This gives you the power and protection offered by the Access Control Service in the cloud, but also the ability to flow additional tokens to the on-premise component for additional security features to be implemented.
    Sample
    The sample used in this post is available at the following location: https://s3.amazonaws.com/CSCBlogSamples/Acme.Azure.ServiceBus.Poc.Cert.zip

    Read the article
