Search Results

Search found 30038 results on 1202 pages for 'information technology managment'.


  • Homepage 301 Redirect to SSL Homepage

    - by user33692
    I'm hoping somebody might be able to provide a bit of advice on an issue I am having. I have one site where we implemented a 301 redirect on the homepage from http to https. We have links on the homepage to other parts of the site that are not under SSL (in fact there is only one other page under SSL). When I go to our webmaster account I notice that we are not being provided with any webmaster information (search queries, backlinks) related to our homepage under SSL. I performed a Fetch as Google on the homepage and it returned the following:

      HTTP/1.1 301 Moved Permanently
      Date: Fri, 08 Nov 2013 17:26:24 GMT
      Server: Apache/2.2.16 (Debian)
      Location: https://mysite.com/
      Vary: Accept-Encoding
      Content-Encoding: gzip
      Content-Length: 242
      Keep-Alive: timeout=15, max=100
      Connection: Keep-Alive
      Content-Type: text/html; charset=iso-8859-1

      <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
      <html><head>
      <title>301 Moved Permanently</title>
      </head><body>
      <h1>Moved Permanently</h1>
      <p>The document has moved <a href="https://mysite.com/">here</a>.</p>
      <hr>
      <address>Apache/2.2.16 (Debian) Server at mysite.com</address>
      </body></html>

    I am worried that Fetch as Google is not getting the correct title tag and meta information from our homepage, and that this is hurting our search results. Additionally, I am worried that we need to do something specific with the sitemap to ensure that Google correctly indexes all our pages and can move between the https and http pages without issues. Does anybody have any advice on how we can set this up correctly, or be sure that Google is fetching the correct information?
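    A quick way to sanity-check the redirect from a shell (a hedged sketch; mysite.com stands in for the real domain) is to pull the headers for both versions of the homepage with curl and confirm that only the http side answers 301 while the https side answers 200 and serves the real title tag:

      # The http homepage should return a single 301 pointing at https.
      curl -sI http://mysite.com/ | grep -E '^(HTTP|Location)'

      # The https homepage should return 200, not another redirect.
      curl -sI https://mysite.com/ | grep '^HTTP'

      # Confirm the real title tag is served over https; this is the page
      # Google should index after following the 301.
      curl -s https://mysite.com/ | grep -i '<title>'

    If the https response is a 200 with the correct title, the redirect itself is behaving normally; the bare "Moved Permanently" page in the fetch output above is exactly what a crawler should see on the http URL before following the redirect.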

    Read the article

  • Gentoo Linux useful utilities

    - by Alakdae
    I want to make a list of utilities that come in handy in Gentoo (general Linux tools available in all distributions are also appreciated). What tools and commands do you use and consider helpful in administration of a Gentoo server? I will update the list with commands from answers from time to time. (A combined maintenance sketch built from these tools appears after the list.)

    eclean
      Utility for cleaning distfiles and binary packages.
      Usage example: eclean distfiles
      Usage example output: Cleans out the files in /usr/portage/distfiles. Pretty handy.
      Package: app-portage/gentoolkit

    eix
      Very useful tool for getting information about a package. Similar to "emerge -s" but much faster and more precise.
      Usage example: eix gentoolkit
      Usage example output: Shows information about the package, such as available versions, masked versions, installed versions and description.
      Package: app-portage/eix

    eix-test-obsolete
      Checks the system for obsolete, redundant or uninstalled entries in package.keywords, package.mask, package.unmask, package.use and package.cflags.
      Usage example: eix-test-obsolete
      Usage example output: Shows non-matching entries, redundant entries, and uninstalled entries.
      Package: app-portage/eix

    equery
      Another very useful tool for getting information about packages (listing package files, checking which files belong to which package and much more).
      Usage example: equery b emerge
      Usage example output: Shows which package installed a file called emerge.
      Package: app-portage/gentoolkit

    genlop
      Utility for extracting information about emerged ebuilds.
      Usage example: genlop -l --date yesterday
      Usage example output: Shows a list of packages that were emerged yesterday.
      Package: app-portage/genlop

    glsa-check
      Checks whether the system is affected by GLSAs (security issues).
      Usage example: glsa-check -l affected
      Usage example output: Lists the GLSAs the system is affected by.
      Package: app-portage/gentoolkit

    rc-update
      Utility for managing (adding, deleting) runlevel scripts.
      Usage example: rc-update add syslog-ng default
      Usage example output: Adds syslog-ng to the default runlevel.
      Package: sys-apps/baselayout

    revdep-rebuild
      Scans libraries and binaries for missing shared library dependencies.
      Usage example: revdep-rebuild
      Usage example output: Gathers binary and library information, checks for dependencies, and rebuilds packages with missing dependencies.
      Package: app-portage/gentoolkit
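    To tie a few of these together, here is a hedged sketch of a periodic maintenance pass built only from the tools listed above (run as root; the -p flag keeps revdep-rebuild in pretend mode until you are ready to rebuild):

      # Check installed packages against known security advisories.
      glsa-check -l affected

      # Report obsolete or redundant entries in the /etc/portage files.
      eix-test-obsolete

      # Remove stale source tarballs from /usr/portage/distfiles.
      eclean distfiles

      # Dry-run scan for binaries linked against missing libraries;
      # drop -p to actually rebuild anything broken.
      revdep-rebuild -p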

    Read the article

  • Oracle SOA Suite for healthcare integration Dashboard, by Nitesh Jain

    - by JuergenKress
    Oracle SOA Suite for healthcare integration offers a new way of monitoring: users can configure a dashboard and follow dynamic runtime changes. Oracle SOA Suite for healthcare integration dashboards display information about the current health of the endpoints in a healthcare integration application. You can create and configure multiple dashboards as needed to monitor the status and volume metrics for the endpoints you have defined. The dashboard reflects changes that occur in the runtime repository, such as purging runtime instance data, new messages processed, and new error messages. You can display data for various time periods, and you can manually refresh the data in real time or set the dashboard to refresh automatically at set intervals. The dashboard shows the following information:
    Status: The current status of the endpoint, such as Running, Idle, Disabled, or Errors.
    Messages Sent: The number of messages sent by the endpoint in the specified time period.
    Messages Received: The number of messages received by the endpoint in the specified time period.
    Errors: The number of messages with errors for the endpoint in the given time period.
    Last Sent: The date and time the last message was sent from the endpoint.
    Last Received: The date and time the last message was received from the endpoint.
    Last Error: The date and time of the last error for the endpoint.
    The dashboard also shows a detailed view of a specific endpoint: the document type, the number of messages received per second, the total number of messages processed in the specified time period, and the average size of each message. For more information please visit Nitesh Jain's blog.
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • How to efficiently merge a lot of vCard files for the same person?

    - by mihi
    I currently have contact information in several places:
    - my old PDA's address book
    - my mobile phone's phone book (primarily name, phone number)
    - my email client's address book (primarily name, email)
    - my web mailer's address book (primarily name, email)
    - my instant messenger's contact list (primarily name, IM handle, email, birthday)
    And there are several social or business networking sites on the Internet where contacts provide information about themselves, like LinkedIn or XING. All those sources can export as vCard, but as you might imagine, I get a lot of vCards for the very same contact that way. Are there any tools where I can import them and then merge them (it may ask me which phone number is more current in case of field clashes, of course)? Bonus points if it can track which information I have discarded, so when I re-export all information from one of the sources I can't import to (networking sites), it won't ask me again whether I want to overwrite the phone number of person X with the same ancient number... I hope you understand what I am trying to accomplish; if not, just ask :-)
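    While a proper merge really wants a dedicated tool, a rough first pass to gauge the duplication can be done from a shell (a sketch, assuming all the exports sit as individual .vcf files in one directory):

      # Count how many exported cards share the same formatted name (FN),
      # i.e. how many contacts will eventually need merging.
      grep -h '^FN:' *.vcf | sort | uniq -c | sort -rn | head

      # List only the names that occur in more than one card.
      grep -h '^FN:' *.vcf | sort | uniq -d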

    Read the article

  • More details on America's Cup use of Oracle Data Mining

    - by charlie.berger
    BMW Oracle Racing's America's Cup: A Victory for Database Technology
    BMW Oracle Racing's victory in the 33rd America's Cup yacht race in February showcased the crew's extraordinary sailing expertise. But to hear them talk, the real stars weren't actually human. "The story of this race is in the technology," says Ian Burns, design coordinator for BMW Oracle Racing.
    Gathering and Mining Sailing Data
    From the drag-resistant hull to its 23-story wing sail, the BMW Oracle USA trimaran is a technological marvel. But to learn to sail it well, the crew needed to review enormous amounts of reliable data every time they took the boat for a test run. Burns and his team collected performance data from 250 sensors throughout the trimaran at the rate of 10 times per second. An hour of sailing alone generates 90 million data points. BMW Oracle Racing turned to Oracle Data Mining in Oracle Database 11g to extract maximum value from the data. Burns and his team reviewed and shared raw data with crew members daily using a web application built in Oracle Application Express (Oracle APEX). "Someone would say, 'Wouldn't it be great if we could look at some new combination of numbers?' We could quickly build an Oracle Application Express application and share the information during the same meeting," says Burns.
    Analyzing Wind and Other Environmental Conditions
    Burns then streamed the data to the Oracle Austin Data Center, where a dedicated team tackled deeper analysis. Because the data was collected in an Oracle Database, the Data Center team could dive straight into the analytics problems without having to do any extract, transform, and load processes or data conversion. And the many advanced data mining algorithms in Oracle Data Mining allowed the analytics team to build vital performance analytics. For example, the technology team could remove masking elements such as environmental conditions to give accurate data on the best mast rotation for certain wind conditions. Without the data mining, Burns says, the boat wouldn't have run as fast. "The design of the boat was important, but once you've got it designed, the whole race is down to how the guys can use it," he says. "With Oracle database technology we could compare the incremental improvements in our performance from the first day of sailing to the very last day. With data mining we could check data against the things we saw, and we could find things that weren't otherwise easily observable and findable."

    Read the article

  • Ubuntu turns off instead of suspending / sleeping

    - by Marcos
    I'm using Ubuntu 11.10 on my notebook as the main OS. However, I've been facing a serious problem: every time I try to suspend it, or it just stays idle for a while, the computer shuts down instead of suspending. It actually seems to be suspended, but when I press the button to wake it, it powers on as if from a cold boot and all open work is lost. How can I fix this?
    P.S.: On Windows 7, suspend/sleep and resume work just fine.
    Here is a complete list of hardware:

      00:00.0 Host bridge [0600]: Intel Corporation 2nd Generation Core Processor Family DRAM Controller [8086:0104] (rev 09)
      00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller [8086:0116] (rev 09)
      00:16.0 Communication controller [0780]: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 [8086:1c3a] (rev 04)
      00:1a.0 USB Controller [0c03]: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 [8086:1c2d] (rev 04)
      00:1b.0 Audio device [0403]: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller [8086:1c20] (rev 04)
      00:1c.0 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 [8086:1c10] (rev b4)
      00:1c.1 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 [8086:1c12] (rev b4)
      00:1c.2 PCI bridge [0604]: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 3 [8086:1c14] (rev b4)
      00:1d.0 USB Controller [0c03]: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 [8086:1c26] (rev 04)
      00:1f.0 ISA bridge [0601]: Intel Corporation HM65 Express Chipset Family LPC Controller [8086:1c49] (rev 04)
      00:1f.2 SATA controller [0106]: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller [8086:1c03] (rev 04)
      00:1f.3 SMBus [0c05]: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller [8086:1c22] (rev 04)
      01:00.0 Network controller [0280]: Realtek Semiconductor Co., Ltd. RTL8188CE 802.11b/g/n WiFi Adapter [10ec:8176] (rev 01)
      02:00.0 System peripheral [0880]: JMicron Technology Corp. SD/MMC Host Controller [197b:2382] (rev 80)
      02:00.2 SD Host controller [0805]: JMicron Technology Corp. Standard SD Host Controller [197b:2381] (rev 80)
      02:00.3 System peripheral [0880]: JMicron Technology Corp. MS Host Controller [197b:2383] (rev 80)
      02:00.5 Ethernet controller [0200]: JMicron Technology Corp. JMC250 PCI Express Gigabit Ethernet Controller [197b:0250] (rev 03)

    I've updated the kernel to 3.2 and hibernate or suspend is still not working. My swap is 1/4 of my RAM.
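    As a first diagnostic step (a sketch, assuming the pm-utils stack that Ubuntu 11.10 uses for suspend), trigger a suspend by hand and inspect the logs after waking; a machine that powers off instead of suspending usually leaves a trace here:

      # Suspend manually, bypassing the desktop power manager.
      sudo pm-suspend

      # After waking (or rebooting), look for failing suspend hooks.
      grep -iE 'error|fail' /var/log/pm-suspend.log

      # Kernel-side view of the suspend/resume cycle.
      dmesg | grep -iE 'suspend|resume' | tail -n 20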

    Read the article

  • Making a Statement: How to retrieve the T-SQL statement that caused an event

    - by extended_events
    If you’ve done any troubleshooting of T-SQL, you know that sooner or later, probably sooner, you’re going to want to take a look at the actual statements you’re dealing with. In Extended Events we offer an action (see the BOL topic that covers Extended Events Objects for a description of actions) named sql_text that seems like it is just the ticket. Well… not always – sounds like a good reason for a blog post.
    When is a statement not THE statement?
    The sql_text action returns the same information that is returned from DBCC INPUTBUFFER, which may or may not be what you want. For example, if you execute a stored procedure, the sql_text action will return something along the lines of “EXEC sp_notwhatiwanted”, assuming that is the statement you sent from the client. Often times folks would like something more specific, like the actual statements that are being run from within the stored procedure or batch.
    Enter the stack
    Extended Events offers another action, this one with the descriptive name of tsql_stack, that includes the sql_handle and offset information about the statements being run when an event occurs. With the sql_handle and offset values you can retrieve the specific statement you seek using the dynamic management function sys.dm_exec_sql_text. The BOL topic for sys.dm_exec_sql_text provides an example for how to extract this information, so I’ll cover the gymnastics required to get the sql_handle and offset values out of the tsql_stack data collected by the action. I’m the first to admit that this isn’t pretty, but this is what we have in SQL Server 2008 and 2008 R2. We will be making it easier to get statement-level information in the next major release of SQL Server.
    The sample code
    For this example I have a stored procedure that includes multiple statements and I have a need to differentiate between those two statements in my tracing. I’m going to track two events: module_end tracks the completion of the stored procedure execution and sp_statement_completed tracks the execution of each statement within a stored procedure. I’m adding the tsql_stack action (since that’s the topic of this post) and the sql_text action for comparison’s sake. (If you have questions about creating event sessions, check out Pedro’s post Introduction to Extended Events.)

      USE AdventureWorks2008
      GO

      -- Test SP
      CREATE PROCEDURE sp_multiple_statements
      AS
      SELECT 'This is the first statement'
      SELECT 'this is the second statement'
      GO

      -- Create a session to look at the sp
      CREATE EVENT SESSION track_sprocs ON SERVER
      ADD EVENT sqlserver.module_end
          (ACTION (sqlserver.tsql_stack, sqlserver.sql_text)),
      ADD EVENT sqlserver.sp_statement_completed
          (ACTION (sqlserver.tsql_stack, sqlserver.sql_text))
      ADD TARGET package0.ring_buffer
      WITH (MAX_DISPATCH_LATENCY = 1 SECONDS)
      GO

      -- Start the session
      ALTER EVENT SESSION track_sprocs ON SERVER
      STATE = START
      GO

      -- Run the test procedure
      EXEC sp_multiple_statements
      GO

      -- Stop collection of events but maintain ring buffer
      ALTER EVENT SESSION track_sprocs ON SERVER
      DROP EVENT sqlserver.module_end,
      DROP EVENT sqlserver.sp_statement_completed
      GO

    Aside: Altering the session to drop the events is a neat little trick that allows me to stop collection of events while keeping in-memory targets such as the ring buffer available for use. If you stop the session, the in-memory target data is lost. Now that we’ve collected some events related to running the stored procedure, we need to do some processing of the data.
    I’m going to do this in multiple steps using temporary tables so you can see what’s going on; kind of like having to “show your work” on a math test. The first step is to just cast the target data into XML so I can work with it. After that you can pull out the interesting columns; for our purposes I’m going to limit the output to just the event name, object name, stack and sql text. You can see that I’ve done a second CAST, this time of the tsql_stack column, so that I can further process this data.

      -- Store the XML data to a temp table
      SELECT CAST(t.target_data AS XML) xml_data
      INTO #xml_event_data
      FROM sys.dm_xe_sessions s
          INNER JOIN sys.dm_xe_session_targets t
              ON s.address = t.event_session_address
      WHERE s.name = 'track_sprocs'

      SELECT * FROM #xml_event_data

      -- Parse the column data out of the XML block
      SELECT
          event_xml.value('(./@name)', 'varchar(100)') as [event_name],
          event_xml.value('(./data[@name="object_name"]/value)[1]', 'varchar(255)') as [object_name],
          CAST(event_xml.value('(./action[@name="tsql_stack"]/value)[1]', 'varchar(MAX)') as XML) as [stack_xml],
          event_xml.value('(./action[@name="sql_text"]/value)[1]', 'varchar(max)') as [sql_text]
      INTO #event_data
      FROM #xml_event_data
          CROSS APPLY xml_data.nodes('//event') n (event_xml)

      SELECT * FROM #event_data

    Each event comes back as one row:

      event_name:  sp_statement_completed
      object_name: NULL
      stack_xml:   <frame level="1" handle="0x03000500D0057C1403B79600669D00000100000000000000" line="4" offsetStart="94" offsetEnd="172" />
                   <frame level="2" handle="0x01000500CF3F0331B05EC084000000000000000000000000" line="1" offsetStart="0" offsetEnd="-1" />
      sql_text:    EXEC sp_multiple_statements

      event_name:  sp_statement_completed
      object_name: NULL
      stack_xml:   <frame level="1" handle="0x03000500D0057C1403B79600669D00000100000000000000" line="6" offsetStart="174" offsetEnd="-1" />
                   <frame level="2" handle="0x01000500CF3F0331B05EC084000000000000000000000000" line="1" offsetStart="0" offsetEnd="-1" />
      sql_text:    EXEC sp_multiple_statements

      event_name:  module_end
      object_name: sp_multiple_statements
      stack_xml:   <frame level="1" handle="0x03000500D0057C1403B79600669D00000100000000000000" line="0" offsetStart="0" offsetEnd="0" />
                   <frame level="2" handle="0x01000500CF3F0331B05EC084000000000000000000000000" line="1" offsetStart="0" offsetEnd="-1" />
      sql_text:    EXEC sp_multiple_statements

    After parsing the columns it’s easier to see what is recorded. You can see that I got back two sp_statement_completed events, which makes sense given the test procedure I’m running, and I got back a single module_end for the entire statement. As described, the sql_text isn’t telling me what I really want to know for the first two events, so a little extra effort is required.
      -- Parse the tsql stack information into columns
      SELECT
          event_name,
          object_name,
          frame_xml.value('(./@level)', 'int') as [frame_level],
          frame_xml.value('(./@handle)', 'varchar(MAX)') as [sql_handle],
          frame_xml.value('(./@offsetStart)', 'int') as [offset_start],
          frame_xml.value('(./@offsetEnd)', 'int') as [offset_end]
      INTO #stack_data
      FROM #event_data
          CROSS APPLY stack_xml.nodes('//frame') n (frame_xml)

      SELECT * FROM #stack_data

      event_name              object_name             frame_level  sql_handle                                          offset_start  offset_end
      sp_statement_completed  NULL                    1            0x03000500D0057C1403B79600669D00000100000000000000  94            172
      sp_statement_completed  NULL                    2            0x01000500CF3F0331B05EC084000000000000000000000000  0             -1
      sp_statement_completed  NULL                    1            0x03000500D0057C1403B79600669D00000100000000000000  174           -1
      sp_statement_completed  NULL                    2            0x01000500CF3F0331B05EC084000000000000000000000000  0             -1
      module_end              sp_multiple_statements  1            0x03000500D0057C1403B79600669D00000100000000000000  0             0
      module_end              sp_multiple_statements  2            0x01000500CF3F0331B05EC084000000000000000000000000  0             -1

    Parsing out the stack information doubles the fun and I get two rows for each event. If you examine the stack from the previous table, you can see that each stack has two frames and my query is parsing each event into frames, so this is expected. There is nothing magic about the two frames; that’s just how many I get for this example – it could be fewer or more depending on your statements. The key point here is that I now have a sql_handle and the offset values for those handles, so I can use sys.dm_exec_sql_text to get the actual statement. Just a reminder: this function can only return what is in the cache – if you have old data it’s possible your statements have been ejected from the cache. “Old” is a relative term when talking about caches and can be impacted by server load and how often your statement is actually used. As with most things in life, your mileage may vary.

      SELECT
          qs.*,
          SUBSTRING(st.text, (qs.offset_start/2) + 1,
              ((CASE qs.offset_end
                    WHEN -1 THEN DATALENGTH(st.text)
                    ELSE qs.offset_end
                END - qs.offset_start)/2) + 1) AS statement_text
      FROM #stack_data AS qs
      CROSS APPLY sys.dm_exec_sql_text(CONVERT(varbinary(max), sql_handle, 1)) AS st

      event_name              object_name             frame_level  sql_handle                                          offset_start  offset_end  statement_text
      sp_statement_completed  NULL                    1            0x03000500D0057C1403B79600669D00000100000000000000  94            172         SELECT 'This is the first statement'
      sp_statement_completed  NULL                    1            0x03000500D0057C1403B79600669D00000100000000000000  174           -1          SELECT 'this is the second statement'
      module_end              sp_multiple_statements  1            0x03000500D0057C1403B79600669D00000100000000000000  0             0           C

    Now that looks more like what we were after: the statement_text field shows the actual statement being run when the sp_statement_completed event occurs. You’ll notice that it’s back down to one row per event – what happened to frame 2? The short answer is, “I don’t know.” In SQL Server 2008 nothing is returned from sys.dm_exec_sql_text for the second frame and I believe this to be a bug; this behavior has changed in the next major release and I see the actual statement run from the client in frame 2. (In other words, I see the same statement that is returned by the sql_text action or DBCC INPUTBUFFER.) There is also something odd going on with frame 1 returned from the module_end event; you can see that the offset values are both 0 and only the first letter of the statement is returned.
    It seems like the offset_end should actually be -1 in this case and I’m not sure why it’s not returning this correctly. This behavior is being investigated and will hopefully be corrected in the next major version. You can work around this final oddity by ignoring the offsets and just returning the entire cached statement.

      SELECT
          event_name,
          sql_handle,
          ts.text
      FROM #stack_data
          CROSS APPLY sys.dm_exec_sql_text(CONVERT(varbinary(max), sql_handle, 1)) as ts

      event_name              sql_handle                                          text
      sp_statement_completed  0x0300070025999F11776BAF006F9D00000100000000000000  CREATE PROCEDURE sp_multiple_statements AS SELECT 'This is the first statement' SELECT 'this is the second statement'
      sp_statement_completed  0x0300070025999F11776BAF006F9D00000100000000000000  CREATE PROCEDURE sp_multiple_statements AS SELECT 'This is the first statement' SELECT 'this is the second statement'
      module_end              0x0300070025999F11776BAF006F9D00000100000000000000  CREATE PROCEDURE sp_multiple_statements AS SELECT 'This is the first statement' SELECT 'this is the second statement'

    Obviously this gives more than you want for the sp_statement_completed events, but it’s the right information for module_end. I leave it to you to determine when this information is needed and to use the workaround when appropriate.
    Aside: You might think it’s odd that I’m showing apparent bugs with my samples, but you’re going to see this behavior if you use this method, so you need to know about it. I’m all about transparency.
    Happy Eventing
    - Mike

    Read the article

  • Importing photoimpact .ab3 file database into another album program

    - by Mel F
    I have PhotoImpact Album v5, which has now maxed out at 65,000 photos with database information. I can export the database and file information to a CSV or dBase IV file, but I cannot find any way to import the information into a competitive photo organization/database product like Adobe Lightroom, ACDSee Pro 3 or Photo Collector, etc., that is more modern, will work on Vista/Windows 7, and does not have file limitations. Any help would be greatly appreciated. I really cannot re-enter the data for 65,000 photos manually. I would use any good software to which I can transfer my files and data.

    Read the article

  • Defense Manpower Data Center Wins Award for Excellence in the Workforce

    - by Peggy Chen
    The Defense Manpower Data Center milConnect website recently won the 2012 Excellence.gov Award for Excellence in the Workforce. milConnect is a centralized, online resource that provides military service members (active and retired) and their families (over 42 million in total) quick access to their profiles, health care enrollments, benefits, and other military-related topics. An easy to use, safe and secure website, milConnect also provides service members with convenient access to their personnel and service-related information. The self-service website allows users to quickly and easily find and, where applicable, update their information in the Defense Enrollment Eligibility Reporting System (DEERS), and it transmits information to and from one reliable source safely and securely. The Defense Manpower Data Center (DMDC) maintains the largest, most comprehensive central repository of personnel, manpower, casualty, pay, entitlement, personnel security, person identity and attributes, survey, testing, training, and financial data in the Department of Defense (DoD). This is one of the largest systems of record in the world. milConnect faced the challenge of modernizing the user experience for over 42 million users. With records in over 22 applications and 25 interfaces in hundreds of existing systems, milConnect needed to reduce the complexity of multiple authentication sources as well as consolidate access to existing systems holding sensitive information. It accomplished this using a service-oriented architecture and enterprise security, access, and identity management for self-service access on a massive scale. By providing 24x7x365 secure access and handling over 5 million transactions daily, milConnect, built on Oracle WebCenter, has not only streamlined and improved the customer experience for military personnel and families, it has done so while cutting costs, allowing self-service access, and promoting electronic government. Congrats to the Defense Manpower Data Center and milConnect!

    Read the article

  • Internet Explorer 10 Crashing With BEX Error (Cannot reset to default)

    - by Abdul Wajid
    I am facing an issue with IE 10 on my Windows Server 2012 machine. Every time I access a local intranet web page that automatically opens a new window, or open any page in a new tab or window, it crashes with the error below:

      Problem signature:
        Problem Event Name:        BEX
        Application Name:          IEXPLORE.EXE
        Application Version:       10.0.9200.16384
        Application Timestamp:     50107ee0
        Fault Module Name:         StackHash_1903
        Fault Module Version:      0.0.0.0
        Fault Module Timestamp:    00000000
        Exception Offset:          PCH_03_FROM_IEFRAME+0x0026B982
        Exception Code:            c0000005
        Exception Data:            00000008
        OS Version:                6.2.9200.2.0.0.272.7
        Locale ID:                 1033
        Additional Information 1:  1903
        Additional Information 2:  1903bfd460d0d45dac22ad6eb30cc258
        Additional Information 3:  6536
        Additional Information 4:  6536faeff1b2d044aae2c2dcb49895a2

    I also tried to reset the configuration to its defaults, but that fails as well. Any idea how I can resolve the issue? I also ran iediagcmd.exe and uploaded the resulting CAB file; please see this link to download that CAB file. Thank you

    Read the article

  • E-Business Suite Sessions at Sangam 2013 in Hyderabad

    - by Sara Woodhull
    The Sangam 2013 conference, sponsored jointly by the All-India Oracle Users' Group (AIOUG) and the India Oracle Applications Users Group (IOAUG), will be in Hyderabad, India on November 8-9, 2013. This year, the E-Business Suite Applications Technology Group (ATG) will offer two speaker sessions and a walk-in usability test of upcoming EBS user interface features. It's only about two weeks away, so make your plans to attend if you are in India.
    Sessions
    Oracle E-Business Suite Technology: Latest Features and Roadmap
    Veshaal Singh, Senior Director, ATG Development. Friday, Nov. 9, 11:00-12:00
    This Oracle development session provides an overview of Oracle's product strategy for Oracle E-Business Suite technology, the capabilities and associated business benefits of recent releases, and a review of capabilities on the product roadmap. This is the cornerstone session for Oracle E-Business Suite technology. Come hear about the latest usability enhancements of the user interface; systems administration and configuration management tools; security-related updates; and tools and options for extending, customizing, and integrating Oracle E-Business Suite with other applications.
    Integration Options for Oracle E-Business Suite
    Rekha Ayothi, Lead Product Manager, ATG. Friday, Nov. 9, 2:00-3:00
    In this Oracle development session, you will get an understanding of how, when and where you can leverage Oracle's integration technologies to connect end-to-end business processes across your enterprise, including your Oracle Applications portfolio. This session offers a technical look at Oracle E-Business Suite Integrated SOA Gateway, Oracle SOA Suite, Oracle Application Adapters for Data Integration for Oracle E-Business Suite, and other options for integrating Oracle E-Business Suite with other applications.
    Usability Testing
    There will be multiple opportunities to participate in usability testing at Sangam '13. The User Experience team is running a one-on-one usability study that requires advance registration. In addition, we will be hosting a special walk-in usability lab to get feedback on new Oracle E-Business Suite OA Framework features. The walk-in lab is a shorter usability experience that does not require any pre-registration. In both cases, Oracle wants your feedback! Even if you only have a few minutes, come by the User Experience Lab, meet the team, and try the walk-in lab.

    Read the article

  • Difference between Detach/Attach and Restore/BackUp a DB

    - by SAMIR BHOGAYTA
    Transact-SQL BACKUP/RESTORE is the normal method for database backup and recovery. Databases can be backed up while online. The backup file size is usually smaller than the database files, since only used pages are backed up. Also, in the FULL or BULK_LOGGED recovery model, you can reduce potential data loss by performing transaction log backups. Detaching a database removes the database from SQL Server while leaving the physical database files intact. This allows you to rename or move the physical files and then re-attach. Although one could perform cold backups using this technique, detach/attach isn't really intended to be used as a backup/recovery process. It is commonly recommended that you use BACKUP/RESTORE for disaster recovery (DR) scenarios and for copying data from one location to another. But this is not absolute: for a very large database that you want to move from one location to another, the backup/restore process may take longer than you would like, in which case detaching/attaching the database is the better way, since you can attach a workable database very quickly. Be aware, though, that detaching a database takes it offline for a short time, and that detach/attach provides no DR function. A command-line sketch of both approaches follows below. For more information about detaching and attaching databases, see: Detaching and Attaching Databases http://technet.microsoft.com/en-us/library/ms190794.aspx
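    The sketch below illustrates the two approaches via sqlcmd (the database name, file paths and local default instance are placeholder assumptions):

      # Backup/restore: works while the database stays online, and the
      # backup contains only used pages.
      sqlcmd -S . -Q "BACKUP DATABASE MyDb TO DISK = 'D:\backup\MyDb.bak'"
      sqlcmd -S . -Q "RESTORE DATABASE MyDb FROM DISK = 'D:\backup\MyDb.bak' WITH REPLACE"

      # Detach/attach: takes the database offline briefly, but moving the
      # intact MDF/LDF files and re-attaching is very fast even for a VLDB.
      sqlcmd -S . -Q "EXEC sp_detach_db 'MyDb'"
      sqlcmd -S . -Q "CREATE DATABASE MyDb ON (FILENAME = 'D:\data\MyDb.mdf'), (FILENAME = 'D:\data\MyDb_log.ldf') FOR ATTACH"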

    Read the article

  • SSH ForceCommand example - require a user to enter a token before getting shell access?

    - by consolibyte
    I'd like to prompt a user for some piece of information before they get to their BASH shell when they're logging in via SSH. Ideally, I'd like to execute a script which prompts them for information, checks that the information is correct, and if it is, drops them to a shell. So, think:

      ssh [email protected]
      password: xxxx
      do you agree to the terms and conditions of use?
      enter yes or no: yes
      OK, here's your shell:
      #

    Can anyone provide an example of how to do something like this?
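    A minimal sketch of one way to wire this up (the script path, group name and wording are assumptions; note that ForceCommand applies to every login by the matched users, including scp/sftp, so test carefully):

      # In /etc/ssh/sshd_config, force the gate script for a chosen group:
      #
      #   Match Group restricted
      #       ForceCommand /usr/local/bin/terms-gate.sh

      #!/bin/bash
      # /usr/local/bin/terms-gate.sh - prompt before handing over a shell.
      echo "do you agree to the terms and conditions of use?"
      read -r -p "enter yes or no: " answer
      if [ "$answer" = "yes" ]; then
          echo "OK, here's your shell:"
          if [ -n "$SSH_ORIGINAL_COMMAND" ]; then
              # Honor a non-interactive command the client asked for.
              exec $SSH_ORIGINAL_COMMAND
          else
              exec "$SHELL" -l
          fi
      else
          echo "Terms not accepted; disconnecting." >&2
          exit 1
      fi

    For interactive logins sshd still allocates a tty before running the forced command, so the read prompt works as expected.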

    Read the article

  • Security vulnerability and NDAs [closed]

    - by Chris
    I want to propose a situation and gain insight from the community's thoughts. A customer, call them Customer X, has a contract with a vendor, Vendor Y, to provide an application and services. Customer X discovers a serious authentication vulnerability in Vendor Y's software. Vendor Y and Customer X have a discussion. Vendor Y acknowledges and confirms the flaw. Vendor Y confirms they will put in the effort to fix it. Customer X requests that Vendor Y inform all customers impacted by this. The vendor agrees. Fast forward two months: the flaw has not been fixed. Patches were applied to mitigate it, but the flaw still exists. However, no customers were informed of the issue. At this point Customer X contacts Vendor Y to determine the status and understand why customers were not informed. The vendor nicely reminds the customer they are under an NDA and are still working on the issue.
    A few questions/discussion pieces come out of this. By discussing a software flaw with a vendor, does this imply you have agreed to any type of NDA disclosure? Additionally, what rights does Customer X have to inform other customers of this vulnerability if the vendor does not appear willing to comply? I (the OP) am under the impression that when this situation occurs, you are supposed to notify the vendor of the issue and provide them with ample time to respond; if there is no response, you are able to do what you wish with the information. I am thinking back to the MIT/subway incident where the students contacted the transit authorities, the transit authorities didn't respond in a timely fashion, and so the students disclosed the information publicly on their own.
    A few things to note: I am not the customer in the above situation. Also, let's assume, for the purpose of keeping the discussion on track, that Customer X has no intention of disclosing the information; they are merely concerned and interested in making sure other customers are aware until it is fixed, so they do not experience a major security breach. (More information can be supplied if needed to add context to the question.)

    Read the article

  • ACORD LOMA 2010: Building Insurance Companies in the Clouds

    - by [email protected]
    Chuck Johnston, vice president of global strategy and alliances for Oracle Insurance, participated in a featured speaking session at ACORD LOMA 2010. He provides an update on his discussions with insurers at the show and after his presentation.
    Every year I make a point of walking the show floor at the ACORD LOMA technology conference to visit with colleagues and competitors, and try to get a feel for which way the industry will move over the next 12 months. Insurers are looking for substance in cloud computing, trying to mix business with pleasure (monetizing social networks), and expecting differentiation through commodity (software as a service). The disconnect at this show is that most vendors are still struggling with creating a clear path from Facebook to customer intimacy, from SaaS to core cost savings, and from clouds to ubiquitous presence. Vendors need to find new ways to help insurers find the real value in these potentially disruptive technologies by understanding the changes coming to the insurance business and how these new technologies impact the new insurance business.
    Oracle's approach to understanding the evolving insurance industry comes from a discussion with our customers in our Insurance CIO Council, where one of our customers suggested we buy an insurance company to really understand our customers. We have decided to do the next best thing and build our own model of an insurance company, Alamere Insurance, that uses the latest technologies to transform its own business. Alamere will never issue an actual policy, but it does give us a framework to consider the impacts of changes in the insurance landscape and how Oracle technology meets the challenge or needs to evolve to help our customers be successful.
    In preparing for my talk at the conference using Alamere as my organizing theme, I found myself reading actuarial memoranda on CSO table changes and articles on underwriting theory that really made me think about my customers' problems first and foremost, and then how Oracle technology can provide answers. As much as I prefer techno-thrillers and sci-fi novels to actuarial papers for plane reading, I got very excited about the idea of putting myself back in the customer shoes I haven't worn in a decade, and really looking at how Oracle can power the Adaptive Insurance Enterprise.
    Talking to customers and industry people after the session, the idea of Alamere seemed to excite people, and I got a lot of suggestions as to what lines of business we should model and where we should focus first on technology uptake. One customer said to a colleague that Oracle's attempt to "share their pain" was unique among vendors. More about Alamere, and the Adaptive Insurance Enterprise, next time.
    Chuck Johnston is vice president of global strategy and alliances for Oracle Insurance.

    Read the article

  • New Options for MySQL High Availability

    - by Mat Keep
    Data is the currency of today’s web, mobile, social, enterprise and cloud applications. Ensuring data is always available is a top priority for any organization – minutes of downtime will result in significant loss of revenue and reputation. There is no “one size fits all” approach to delivering High Availability (HA). Unique application attributes, business requirements, operational capabilities and legacy infrastructure can all influence HA technology selection. And technology is only one element in delivering HA – “people and processes” are just as critical as the technology itself. For this reason, MySQL Enterprise Edition is available supporting a range of HA solutions, fully certified and supported by Oracle. MySQL Enterprise HA is not some expensive add-on; it is included within the core Enterprise Edition offering, along with the management tools, consulting and 24x7 support needed to deliver true HA.
    At the recent MySQL Connect conference, we announced new HA options for MySQL users running on both Linux and Solaris:
    - DRBD for MySQL
    - Oracle Solaris Clustering for MySQL
    DRBD (Distributed Replicated Block Device) is an open source Linux kernel module which leverages synchronous replication to deliver highly available database applications across local storage. DRBD synchronizes database changes by mirroring data from an active node to a standby node, and supports automatic failover and recovery. Linux, DRBD, Corosync and Pacemaker provide an integrated stack of mature and proven open source technologies. (Figure: the DRBD stack providing synchronous replication for the MySQL database with InnoDB.) Download the DRBD for MySQL whitepaper to learn more, including step-by-step instructions to install, configure and provision DRBD with MySQL; a day-to-day operational sketch also follows below.
    Oracle Solaris Cluster provides high availability and load balancing to mission-critical applications and services in physical or virtualized environments. With Oracle Solaris Cluster, organizations have a scalable and flexible solution that is suited equally to small clusters in local datacenters or larger multi-site, multi-cluster deployments that are part of enterprise disaster recovery implementations. The Oracle Solaris Cluster MySQL agent integrates seamlessly with MySQL, offering a selection of configuration options in the various Oracle Solaris Cluster topologies.
    Putting it All Together
    When you add MySQL Replication and MySQL Cluster into the HA mix, along with 3rd party solutions, users have extensive choice (and decisions to make) to deliver HA services built on MySQL. To make the decision process simpler, we have also published a new MySQL HA Solutions Guide. Exploring beyond just the technology, the guide presents a methodology to select the best HA solution for your new web, cloud and mobile services, while also discussing the importance of people and process in ensuring service continuity. This subject was recently presented at Oracle OpenWorld, and the slides are available here.
    Whatever your uptime requirements, you can be sure MySQL has an HA solution for your needs. Please don't hesitate to let us know of your HA requirements in the comments section of this blog. You can also contact MySQL consulting to learn more about their HA Jumpstart offering, which will help you scope out your scaling and HA requirements.
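    As a rough sketch of what operating the DRBD pair looks like day to day (the resource name r0, device and mount point are assumptions; the whitepaper walks through the full configuration):

      # Check replication state on either node; a healthy pair reports
      # its disks as UpToDate/UpToDate.
      cat /proc/drbd

      # Manual failover: promote the standby node, mount the replicated
      # volume, then start MySQL against the data directory on it.
      drbdadm primary r0
      mount /dev/drbd0 /var/lib/mysql_drbd
      service mysql start

    In the stack described above, Pacemaker and Corosync automate exactly this promote-mount-start sequence on node failure.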

    Read the article

  • The Workshop Technique Handbook

    - by llowitz
    The #OUM method pack contains a plethora of information, but if you browse through the activities and tasks contained in OUM, you will see very few references to workshops. Consequently, I am often asked whether OUM supports a workshop-type approach. In general, OUM does not prescribe the manner in which tasks should be conducted, as many factors, such as culture, availability of resources, and the potential travel cost of attendees, can influence whether a workshop is appropriate in a given situation. Although workshops are not typically called out in OUM, OUM encourages the project manager to group the OUM tasks in a way that makes sense for the project.
    OUM considers a workshop to be a technique that can be applied to any OUM task or group of tasks. If a workshop is conducted, it is important to identify the OUM tasks that are executed during the workshop. For example, a “Requirements Gathering Workshop” is quite likely to include Gathering Business Requirements, Gathering Solution Requirements, and perhaps Specifying Key Structure Definitions.
    Not only is a workshop approach to conducting the OUM tasks perfectly acceptable, OUM provides in-depth guidance on how to maximize the value of your workshops. I strongly encourage you to read the “Workshop Techniques Handbook” included in the OUM Manage Focus Area under Method Resources. The Workshop Techniques Handbook provides valuable information on a variety of workshop approaches and discusses the circumstances in which each type of workshop is most effective. Furthermore, it provides detailed information on how to structure workshops and tips on facilitating them. You will find guidance on some popular workshop techniques, such as brainstorming, setting objectives and prioritizing, and on more specialized techniques such as Value Chain Analysis, SWOT analysis, the Delphi Technique and much more.
    Workshops can and should be applied to any type of OUM project, whether that project falls within the Envision, Manage or Implement focus areas. If you typically employ workshops to gather information, walk through a business process, develop a roadmap or validate your understanding with the client, by all means continue utilizing them to conduct the OUM tasks during your project, but first take the time to review the Workshop Techniques Handbook to refresh your knowledge and hone your skills.

    Read the article

  • Why Oracle Data Integrator for Big Data?

    - by Mala Narasimharajan
    Big Data is everywhere these days - but what exactly is it? It’s data that comes from a multitude of sources – not only structured data, but unstructured data as well. The sheer volume of data is mind-boggling; a few examples of big data: climate information collected from sensors, social media information, digital pictures, log files, online video files, medical records and online transaction records. Embedded in big data is tremendous value, and being able to manipulate, load, transform and analyze big data is key to enhancing productivity and competitiveness. The value of big data lies in its propensity for greater in-depth analysis and data segmentation -- in turn giving companies detailed information on product performance, customer preferences and inventory. Furthermore, by being able to store and create more data in digital form, “big data can unlock significant value by making information transparent and usable at much higher frequency.” (McKinsey Global Institute, May 2011)
    Oracle's flagship product for bulk data movement and transformation, Oracle Data Integrator (ODI), is a critical component of Oracle’s Big Data strategy. ODI provides automation, bulk loading, and validation and transformation capabilities for Big Data while minimizing the complexities of using Hadoop. Specifically, the advantages of ODI in a Big Data scenario come from pre-built Knowledge Modules that drive processing in Hadoop. These leverage the graphical UI to load and unload data from Hadoop, perform data validations and create mapping expressions for transformations. The Knowledge Modules provide a key jump-start and eliminate a significant amount of Hadoop development. Using Oracle Data Integrator together with Oracle Big Data Connectors, you can simplify the complexities of mapping, accessing, and loading big data (via NoSQL or HDFS) while also correlating it with your enterprise data – this correlation may require integrating across heterogeneous and standards-based environments, connecting to Oracle Exadata, or sourcing via a big data platform such as Oracle Big Data Appliance. To learn more about Oracle Data Integration and Big Data, download our resource kit to see the latest in whitepapers, webinars, downloads, and more, or visit our website at www.oracle.com/bigdata

    Read the article

  • Windows Server 2008 R2 time keeps changing unexpectedly

    - by lmhost
    There is a serious problem with the time on one of my servers. At seemingly random times (5:59 local time in the following example) the clock goes back one hour, and it keeps doing this again each time it reaches 5:59, like this: 5:59 - 4:59 … 5:59 - 4:59, and so on. It stays in this loop until I manually update the time. The server OS is Windows 2008 R2 Standard x64, acting as a standalone web server on the internet, in time zone (UTC-05:00).
    System log:

      Information 4/13/2012 5:59:34 PM Kernel-General 1 None
      Information 4/13/2012 5:59:34 PM Kernel-General 1 None
      Information 4/13/2012 5:59:34 PM Kernel-General 1 None
      Information 4/13/2012 5:59:34 PM Kernel-General 1 None
      Information 4/13/2012 5:59:34 PM Kernel-General 1 None
      Information 4/13/2012 5:59:34 PM Kernel-General 1 None

    All of the above entries have the same content: "The system time has changed to 2012-04-13T21:59:34.500000000Z from 2012-04-13T22:59:34.500000000Z."
    Details XML view:

      <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
        <System>
          <Provider Name="Microsoft-Windows-Kernel-General" Guid="{A68CA8B7-004F-D7B6-A698-07E2DE0F1F5D}" />
          <EventID>1</EventID>
          <Version>0</Version>
          <Level>4</Level>
          <Task>0</Task>
          <Opcode>0</Opcode>
          <Keywords>0x8000000000000010</Keywords>
          <TimeCreated SystemTime="2012-04-13T21:59:34.500000000Z" />
          <EventRecordID>4060</EventRecordID>
          <Correlation />
          <Execution ProcessID="4" ThreadID="80" />
          <Channel>System</Channel>
          <Computer>********</Computer>
          <Security UserID="S-1-5-18" />
        </System>
        <EventData>
          <Data Name="NewTime">2012-04-13T21:59:34.500000000Z</Data>
          <Data Name="OldTime">2012-04-13T22:59:34.500000000Z</Data>
        </EventData>
      </Event>

    Any idea what's going on? Thanks

    Read the article

  • How to copy a bunch of pages? Is there a 3rd party tool?

    - by unknown (yahoo)
    (I asked the following question at the DNN forum and also at Snowcovered. Nobody knew of such an obvious time-saver being for sale. I'm posting here in case anybody knows of a freeware module that might do this.)
    By "groups of DNN pages", I mean pages that form a hierarchy (not necessarily a hierarchy headed by a page at the same level as the Home page). I know that I can copy web pages one by one, using the admin login via the web-based DNN interface. But I'd prefer a script or wizard of some sort (that runs scripts behind the scenes) that allows me to:
    1) specify a web page that I want to copy (along with the hierarchy of pages under it)
    2) specify the names and titles of the new top-level pages
    3) specify whether each contained module of the top-level page that I want to copy is to be: ( ) New ( ) Copy ( ) Reference (as in the web-based interface)
    4) repeat 3) for each of the source pages in the hierarchy that I want to copy
    You might say that I am looking to do something similar to creating a portal web site based on a template, except that it's not an entirely new website; instead it's a section of the current web site. I might want to do this because I have an organization which is broken into chapters, and I want each chapter to have, say, its own General Information page (which acts as its home page), and underneath that, in its hierarchy, a Contact Info page and an Events page. So, before:

      Home Page
        General Information Page
          Contact Info
          Events

    and after:

      Home Page
        General Information Page
          Contact Info
          Events
        General Information Page, Kiwanis - Bloomfield
          Contact Info
          Events
        General Information Page, Kiwanis - Dayton
          Contact Info
          Events

    If I have 200 chapters, I certainly don't want to copy those 3 web pages using the web-based interface, as that would take a long time. (And imagine if each chapter's new sub-website had 30 pages!) I just want to specify the parameters of a copy process, then press a button, and let the system do the rest.

    Read the article

  • Justification for learning/implementing newer Microsoft technologies

    - by Darren
    I work at a large healthcare organization as a mid-level software developer. I have over 10 years' experience in the IT industry using Microsoft technologies (ASP.NET & SQL Server). When I go to conferences, code camps, and .NET user group meetings, I hear of all kinds of new tools and technologies: MVC, LINQ, Entity Framework, WCF web services, etc. I guess you could say I'm in my comfort zone using the same old stuff from ASP.NET 2.0. I use typed datasets for my data access layer. I use web forms and feature-rich server controls with master pages. I know how to use plain old SQL and create queries in my typed datasets to get at the data my applications need. Throughout my career, I've always been careful not to let my skill set become obsolete. What I currently use works fine and my development time is fast. But I'm concerned that if I were laid off, I would be asked in interviews how many MVC apps I've written, or how I am with LINQ or WCF web services. I know that it doesn't matter how many conferences, books, or videos I take in on some new technology: I have to implement/use it or it simply won't sink in. Also, managers who interview don't care how much someone reads up on something, only real use of and experience with a technology. I have a new project to write. I've gone to my manager and asked for additional time on the project for learning/implementing technology I may not be familiar with. Our organization encourages its employees to "learn and grow" and to continue their education. But I always get resistance when I ask for more time to ramp up on something new to implement. My manager is asking for concrete business reasons for implementing these new technologies. I don't have business reasons; my reason is that I don't want to become obsolete. I could say it would make the project more maintainable in the future by other developers, since at some point people could stop using these older technologies, but that's about all I can think of. Do LINQ/Entity Framework/MVC apps perform better? So much so that the customers (users in the departments I'm creating this app for) need them? I doubt it. I'm interested in your thoughts on this. Do many of you have similar plights when trying to use newer upcoming technologies? I doubt I'm on the bleeding edge of technology, either. Are there "business reasons" that you would bring to light for using these technologies? Thanks in advance! Sorry for the long wall of text.

    Read the article

  • Designing configuration for subobjects

    - by Stefano Borini
    I have the following situation: I have a class (let's call it Main) encapsulating a complex process. This class in turn orchestrates a sequence of subalgorithms (AlgoA, AlgoB), each one represented by an individual class. To configure Main, I have the configuration stored in a configuration object MainConfig. This object contains all the config information for AlgoA and AlgoB with their specific parameters. AlgoA has no interest in the configuration of AlgoB, so technically I could have (and in practice I have) contained MainConfig.AlgoAConfig and MainConfig.AlgoBConfig instances, and initialize as AlgoA(MainConfig.AlgoAConfig) and AlgoB(MainConfig.AlgoBConfig).
    The problem is that there is some common configuration data. One example is the printLevel. I currently have MainConfig.printLevel. I need to propagate this information to both AlgoA and AlgoB, because they have to know how much to print. MainConfig also needs to know how much to print. So the available solutions are:
    1) I pass the MainConfig to AlgoA and AlgoB. This way, AlgoA technically has access to the whole configuration (even that of AlgoB) and is less self-contained.
    2) I copy MainConfig.printLevel into AlgoAConfig and AlgoBConfig, so I basically have the same printLevel information repeated three times.
    3) I create a third configuration class PrintingConfig. I have an instance variable MainConfig.printingConfig, and then pass AlgoA both MainConfig.AlgoAConfig and MainConfig.printingConfig.
    Have you ever encountered this situation? How did you solve it? Which one is stylistically clearer to a new reader of the code?

    Read the article

  • How to install Network Adapter Drivers for Atheros AR8161/8165 PCI-E Gigabit Ethernet Controller (NDIS 6.20) Ubuntu 12.04

    - by Jessica Burnett
    How can I install drivers for the 64-bit Atheros AR8161/8165 PCI-E Gigabit Ethernet Controller (NDIS 6.20) on Ubuntu 12.04? I dual-boot Windows 7 and Ubuntu 12.04; the drivers work for 64-bit Windows 7.
    lspci -nn:

      00:00.0 Host bridge [0600]: Intel Corporation Ivy Bridge DRAM Controller [8086:0154] (rev 09)
      00:01.0 PCI bridge [0604]: Intel Corporation Ivy Bridge PCI Express Root Port [8086:0151] (rev 09)
      00:02.0 VGA compatible controller [0300]: Intel Corporation Ivy Bridge Graphics Controller [8086:0166] (rev 09)
      00:14.0 USB controller [0c03]: Intel Corporation Panther Point USB xHCI Host Controller [8086:1e31] (rev 04)
      00:16.0 Communication controller [0780]: Intel Corporation Panther Point MEI Controller #1 [8086:1e3a] (rev 04)
      00:1a.0 USB controller [0c03]: Intel Corporation Panther Point USB Enhanced Host Controller #2 [8086:1e2d] (rev 04)
      00:1b.0 Audio device [0403]: Intel Corporation Panther Point High Definition Audio Controller [8086:1e20] (rev 04)
      00:1c.0 PCI bridge [0604]: Intel Corporation Panther Point PCI Express Root Port 1 [8086:1e10] (rev c4)
      00:1c.1 PCI bridge [0604]: Intel Corporation Panther Point PCI Express Root Port 2 [8086:1e12] (rev c4)
      00:1c.3 PCI bridge [0604]: Intel Corporation Panther Point PCI Express Root Port 4 [8086:1e16] (rev c4)
      00:1d.0 USB controller [0c03]: Intel Corporation Panther Point USB Enhanced Host Controller #1 [8086:1e26] (rev 04)
      00:1f.0 ISA bridge [0601]: Intel Corporation Panther Point LPC Controller [8086:1e59] (rev 04)
      00:1f.2 SATA controller [0106]: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] [8086:1e03] (rev 04)
      00:1f.3 SMBus [0c05]: Intel Corporation Panther Point SMBus Controller [8086:1e22] (rev 04)
      01:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:0de9] (rev a1)
      02:00.0 Ethernet controller [0200]: Atheros Communications Inc. AR8161 Gigabit Ethernet [1969:1091] (rev 08)
      03:00.0 Network controller [0280]: Intel Corporation Centrino Wireless-N 2200 [8086:0891] (rev c4)
      04:00.0 System peripheral [0880]: JMicron Technology Corp. SD/MMC Host Controller [197b:2392] (rev 30)
      04:00.2 SD Host controller [0805]: JMicron Technology Corp. Standard SD Host Controller [197b:2391] (rev 30)
      04:00.3 System peripheral [0880]: JMicron Technology Corp. MS Host Controller [197b:2393] (rev 30)
      04:00.4 System peripheral [0880]: JMicron Technology Corp. xD Host Controller [197b:2394] (rev 30)

    sudo lshw -c network:

      *-network UNCLAIMED
           description: Ethernet controller
           product: AR8161 Gigabit Ethernet
           vendor: Atheros Communications Inc.
           physical id: 0
           bus info: pci@0000:02:00.0
           version: 08
           width: 64 bits
           clock: 33MHz
           capabilities: pm pciexpress msi msix bus_master cap_list
           configuration: latency=0
           resources: memory:d3a00000-d3a3ffff ioport:2000(size=128)
      *-network
           description: Wireless interface
           product: Centrino Wireless-N 2200
           vendor: Intel Corporation
           physical id: 0
           bus info: pci@0000:03:00.0
           logical name: wlan0
           version: c4
           serial: 9c:4e:36:14:d4:7c
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
           configuration: broadcast=yes driver=iwlwifi driverversion=3.2.0-23-generic firmware=18.168.6.1 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
           resources: irq:45 memory:d3900000-d3901fff

    I also tried manually configuring the wired connection. Neither wired nor wireless connects.
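    One hedged first step: the AR8161 is driven by the alx module in later kernels, which the stock 3.2 kernel in Ubuntu 12.04 does not ship, so the NIC shows up as UNCLAIMED in the lshw output above. Something along these lines can confirm the diagnosis and test a backported module if one is installed:

      # Confirm no kernel driver is bound to the NIC; with a working driver
      # lspci would print a "Kernel driver in use:" line for this device.
      lspci -nn -k -s 02:00.0

      # If a backported build of the driver is installed, load it and
      # watch the kernel log for the interface registering.
      sudo modprobe alx
      dmesg | tail -n 20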

    Read the article

  • Transition from web-app development to telecom Intelligent Network (IN) layer development

    - by SIJAR
    How difficult is it for someone who has developed web applications using J2EE, Spring, etc., to develop an application for the telecom Intelligent Network (IN) layer? The telecom product is to be developed using an SS7 stack, the JAIN API, and J2EE technology. How vital is knowledge of the telecommunications sector? Does anyone anticipate growth in the telecommunications sector, and what future lies ahead for web development?

    Read the article

  • Design of input files reading when it comes to defaults/transformations

    - by Stefano Borini
    Suppose you have an application that reads an input file, in a language that does not support the concept of None. The input is read, parsed, and the contents are stored in a structure for later use. Now, in general you want to take into account transformations of the data from the input, such as adding default values when a value is not specified, or adding full path information to a relative path specified in the input. There are two different strategies to achieve this.
    The first strategy is to perform these transformations at input file reading time. In practice, you put all the intelligence into the input parser, and your application has no logic to deal with unexpected circumstances, such as an unspecified value. You lose the information of what was specified and what wasn't, but you gain in black-boxing the details. Your "running code" needs that information in any case and in a proper form, and is not concerned whether it's the default or user-specified information.
    The second strategy is to make the file reader a true one-to-one mapper from the file to an in-memory object, with no intelligent behavior. Unspecified values are not filled in (which may, however, be a problem in languages not supporting None) and data is stored verbatim from the file. The intelligence for recovery must now go into the "running code", which must check what was specified in the file, eventually fall back to a default, or modify the input properly before using it.
    I would like to know your opinion of these two approaches, and in particular which one you have found most frequently implemented.

    Read the article
