Search Results

Search found 6972 results on 279 pages for 'sharepoint 2007'.

  • mysql: Bind on unix socket: Permission denied

    - by Alex
    Can't start mysql with: sudo /usr/bin/mysqld_safe --datadir=/srv/mysql/myDB --log-error=/srv/mysql/logs/mysqld-myDB.log --pid-file=/srv/mysql/pids/mysqld-myDB.pid --user=mysql --socket=/srv/mysql/sockets/mysql-myDB.sock --port=3700 120222 13:40:48 mysqld_safe Starting mysqld daemon with databases from /srv/mysql/myDB 120222 13:40:54 mysqld_safe mysqld from pid file /srv/mysql/pids/mysqld-myDB.pid ended /srv/mysql/logs/mysqld-myDB.log: 120222 13:43:53 mysqld_safe Starting mysqld daemon with databases from /srv/mysql/myDB 120222 13:43:53 [Note] Plugin 'FEDERATED' is disabled. /usr/sbin/mysqld: Table 'plugin' is read only 120222 13:43:53 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it. 120222 13:43:53 InnoDB: Completed initialization of buffer pool 120222 13:43:53 InnoDB: Started; log sequence number 32 4232720908 120222 13:43:53 [ERROR] Can't start server : Bind on unix socket: Permission denied 120222 13:43:53 [ERROR] Do you already have another mysqld server running on socket: /srv/mysql/sockets/mysql-myDB.sock ? 120222 13:43:53 [ERROR] Aborting 120222 13:43:53 InnoDB: Starting shutdown... One instance mysqld is running: $ ps aux | grep mysql mysql 1093 0.0 0.2 169972 18700 ? Ssl 11:50 0:02 /usr/sbin/mysqld $ Port 3700 is available: $ netstat -a | grep 3700 $ Directory with sockets is empty: $ ls /srv/mysql/sockets/ $ There are all permissions: $ ls -l /srv/mysql/ total 20 drwxrwxrwx 2 mysql mysql 4096 2012-02-22 13:28 logs drwxrwxrwx 13 mysql mysql 4096 2012-02-22 13:44 myDB drwxrwxrwx 2 mysql mysql 4096 2012-02-22 12:55 pids drwxrwxrwx 2 mysql mysql 4096 2012-02-22 12:55 sockets drwxrwxrwx 2 mysql mysql 4096 2012-02-22 13:25 version Apparmor config: $cat /etc/apparmor.d/usr.sbin.mysqld # vim:syntax=apparmor # Last Modified: Tue Jun 19 17:37:30 2007 #include <tunables/global> /usr/sbin/mysqld flags=(complain) { #include <abstractions/base> #include <abstractions/nameservice> #include <abstractions/user-tmp> #include <abstractions/mysql> #include <abstractions/winbind> capability dac_override, capability sys_resource, capability setgid, capability setuid, network tcp, /etc/hosts.allow r, /etc/hosts.deny r, /etc/mysql/*.pem r, /etc/mysql/conf.d/ r, /etc/mysql/conf.d/* r, /etc/mysql/*.cnf r, /usr/lib/mysql/plugin/ r, /usr/lib/mysql/plugin/*.so* mr, /usr/sbin/mysqld mr, /usr/share/mysql/** r, /var/log/mysql.log rw, /var/log/mysql.err rw, /var/lib/mysql/ r, /var/lib/mysql/** rwk, /var/log/mysql/ r, /var/log/mysql/* rw, /{,var/}run/mysqld/mysqld.pid w, /{,var/}run/mysqld/mysqld.sock w, /srv/mysql/ r, /srv/mysql/** rwk, /sys/devices/system/cpu/ r, # Site-specific additions and overrides. See local/README for details. #include <local/usr.sbin.mysqld> } Any suggestions? UPD1: $ touch /srv/mysql/sockets/mysql-myDB.sock $ sudo chown mysql:mysql /srv/mysql/sockets/mysql-myDB.sock $ ls -l /srv/mysql/sockets/mysql-myDB.sock -rw-rw-r-- 1 mysql mysql 0 2012-02-22 14:29 /srv/mysql/sockets/mysql-myDB.sock $ sudo /usr/bin/mysqld_safe --datadir=/srv/mysql/myDB --log-error=/srv/mysql/logs/mysqld-myDB.log --pid-file=/srv/mysql/pids/mysqld-myDB.pid --user=mysql --socket=/srv/mysql/sockets/mysql-myDB.sock --port=3700 120222 14:30:18 mysqld_safe Can't log to error log and syslog at the same time. Remove all --log-error configuration options for --syslog to take effect. 120222 14:30:18 mysqld_safe Logging to '/srv/mysql/logs/mysqld-myDB.log'. 
120222 14:30:18 mysqld_safe Starting mysqld daemon with databases from /srv/mysqlmyDB 120222 14:30:24 mysqld_safe mysqld from pid file /srv/mysql/pids/mysqld-myDB.pid ended $ ls -l /srv/mysql/sockets/mysql-myDB.sock ls: cannot access /srv/mysql/sockets/mysql-myDB.sock: No such file or directory $ UPD2: $ sudo netstat -lnp | grep mysql tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 1093/mysqld unix 2 [ ACC ] STREAM LISTENING 5912 1093/mysqld /var/run/mysqld/mysqld.sock $ sudo lsof | grep /srv/mysql/sockets/mysql-myDB.sock lsof: WARNING: can't stat() fuse.gvfs-fuse-daemon file system /home/sears/.gvfs Output information may be incomplete. UPD3: $ cat /etc/mysql/my.cnf # # The MySQL database server configuration file. # # You can copy this to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # # # * IMPORTANT # If you make changes to these settings and your system uses apparmor, you may # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld. # user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. #bind-address = 127.0.0.1 # # * Fine Tuning # key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #max_connections = 100 #table_cache = 64 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 1M query_cache_size = 16M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! #general_log_file = /var/log/mysql/mysql.log #general_log = 1 log_error = /var/log/mysql/error.log # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log expire_logs_days = 10 max_binlog_size = 100M #binlog_do_db = include_database_name #binlog_ignore_db = include_database_name # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! 
# # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/
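    Two hedged leads based on the output above. First, the profile shown is in complain mode, so AppArmor should only log, not block; but if the kernel still has an older cached profile loaded in enforce mode, the socket bind would fail exactly like this. Reloading the profile and watching for denials is cheap to try (a diagnostic sketch, not a confirmed fix). Second, "Table 'plugin' is read only" usually points at datadir file ownership, e.g. files copied as root, which a recursive chown would cure:

        $ sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld   # reload the on-disk profile
        $ dmesg | grep -i apparmor                                  # any DENIED lines for the socket path?
        $ sudo aa-complain /usr/sbin/mysqld                         # force complain mode (apparmor-utils)
        $ sudo chown -R mysql:mysql /srv/mysql                      # assumption: the files, not just the dirs, must be owned by mysql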

    Read the article

  • Java error starting with "log4j:WARN No appenders could be found for logger" in ZuckerReports SugarCRM

    - by Tom McDonnell
    Greetings all. I apologise for posting this problem here, but I do so in desperation after receiving no response on the SugarCRM forums. Even if a reader is unfamiliar with ZuckerReports or SugarCRM some general advice on Java may be of use to me. I have installed ZuckerReports v1.12 in SugarCRM 5.5.1. When I attempt to run a report I get the following error message. cmdline: javaw -classpath "custom/ZuckerReports/resources/;custom/ZuckerReports/resources/contact_counts_by_first_name.jasper_files/;modules/ZuckerReports/jasper/ant-1.7.1.jar;modules/ZuckerReports/jasper/antlr-2.7.6.jar;modules/ZuckerReports/jasper/asm-attrs.jar;modules/ZuckerReports/jasper/asm.jar;modules/ZuckerReports/jasper/barbecue-1.5-beta1.jar;modules/ZuckerReports/jasper/barcode4j-2.0.jar;modules/ZuckerReports/jasper/batik-anim.jar;modules/ZuckerReports/jasper/batik-awt-util.jar;modules/ZuckerReports/jasper/batik-bridge.jar;modules/ZuckerReports/jasper/batik-css.jar;modules/ZuckerReports/jasper/batik-dom.jar;modules/ZuckerReports/jasper/batik-ext.jar;modules/ZuckerReports/jasper/batik-gvt.jar;modules/ZuckerReports/jasper/batik-parser.jar;modules/ZuckerReports/jasper/batik-script.jar;modules/ZuckerReports/jasper/batik-svg-dom.jar;modules/ZuckerReports/jasper/batik-svggen.jar;modules/ZuckerReports/jasper/batik-util.jar;modules/ZuckerReports/jasper/batik-xml.jar;modules/ZuckerReports/jasper/bcel-5.2.jar;modules/ZuckerReports/jasper/bsh-2.0b4.jar;modules/ZuckerReports/jasper/castor-1.2.jar;modules/ZuckerReports/jasper/cglib-2.1.jar;modules/ZuckerReports/jasper/cincom-jr-xmla.jar;modules/ZuckerReports/jasper/commons-beanutils-1.8.2.jar;modules/ZuckerReports/jasper/commons-collections-3.2.1.jar;modules/ZuckerReports/jasper/commons-dbcp-1.2.2.jar;modules/ZuckerReports/jasper/commons-digester-1.7.jar;modules/ZuckerReports/jasper/commons-javaflow-20060411.jar;modules/ZuckerReports/jasper/commons-logging-1.1.jar;modules/ZuckerReports/jasper/commons-math-1.0.jar;modules/ZuckerReports/jasper/commons-pool-1.3.jar;modules/ZuckerReports/jasper/commons-vfs-1.0.jar;modules/ZuckerReports/jasper/dom4j-1.6.jar;modules/ZuckerReports/jasper/ehcache-1.1.jar;modules/ZuckerReports/jasper/eigenbase-properties-1.1.0.10924.jar;modules/ZuckerReports/jasper/eigenbase-resgen-1.3.0.11873.jar;modules/ZuckerReports/jasper/eigenbase-xom-1.3.0.11999.jar;modules/ZuckerReports/jasper/ejb3-persistence.jar;modules/ZuckerReports/jasper/groovy-all-1.5.5.jar;modules/ZuckerReports/jasper/hibernate-annotations.jar;modules/ZuckerReports/jasper/hibernate-commons-annotations.jar;modules/ZuckerReports/jasper/hibernate3.jar;modules/ZuckerReports/jasper/hsqldb-1.8.0-10.jar;modules/ZuckerReports/jasper/iText-2.1.0.jar;modules/ZuckerReports/jasper/iTextAsian.jar;modules/ZuckerReports/jasper/jakarta-bcel-20050813.jar;modules/ZuckerReports/jasper/jasperreports-3.7.1.jar;modules/ZuckerReports/jasper/jasperreports-chart-themes-3.6.2.jar;modules/ZuckerReports/jasper/jasperreports-extensions-3.5.3.jar;modules/ZuckerReports/jasper/jasperreports-fonts-3.6.1.jar;modules/ZuckerReports/jasper/javacup.jar;modules/ZuckerReports/jasper/javassist-3.4.GA.jar;modules/ZuckerReports/jasper/jaxen-1.1.1.jar;modules/ZuckerReports/jasper/jcommon-1.0.15.jar;modules/ZuckerReports/jasper/jdt-compiler-3.1.1.jar;modules/ZuckerReports/jasper/jfreechart-1.0.12.jar;modules/ZuckerReports/jasper/jpa.jar;modules/ZuckerReports/jasper/js_activation-1.1.jar;modules/ZuckerReports/jasper/js_axis-1.4patched.jar;modules/ZuckerReports/jasper/js_commons-codec-1.3.jar;modules/ZuckerReports/jasper/js_commons-dis
covery-0.2.jar;modules/ZuckerReports/jasper/js_commons-httpclient-3.1.jar;modules/ZuckerReports/jasper/js_jasperserver-common-ws-3.5.0.jar;modules/ZuckerReports/jasper/js_jaxrpc.jar;modules/ZuckerReports/jasper/js_mail-1.4.jar;modules/ZuckerReports/jasper/js_saaj-api-1.3.jar;modules/ZuckerReports/jasper/js_wsdl4j-1.5.1.jar;modules/ZuckerReports/jasper/jta.jar;modules/ZuckerReports/jasper/jxl-2.6.jar;modules/ZuckerReports/jasper/log4j-1.2.15.jar;modules/ZuckerReports/jasper/mondrian-3.1.1.12687-Jaspersoft.jar;modules/ZuckerReports/jasper/mysql-connector-java-3.1.11-bin.jar;modules/ZuckerReports/jasper/olap4j-0.9.7.145.jar;modules/ZuckerReports/jasper/png-encoder-1.5.jar;modules/ZuckerReports/jasper/poi-3.2-FINAL-20081019.jar;modules/ZuckerReports/jasper/rex-20080421.jar;modules/ZuckerReports/jasper/rhino-1.7R1.jar;modules/ZuckerReports/jasper/saaj-api-1.3.jar;modules/ZuckerReports/jasper/slf4j-api.jar;modules/ZuckerReports/jasper/slf4j-log4j12.jar;modules/ZuckerReports/jasper/spring.jar;modules/ZuckerReports/jasper/sqleonardo-2007.03.jar;modules/ZuckerReports/jasper/swingx-2007_10_07.jar;modules/ZuckerReports/jasper/xml-apis-ext.jar;modules/ZuckerReports/jasper/xml-apis.jar;modules/ZuckerReports/jasper/zuckerreports-1.0.jar" at.go_mobile.zuckerreports.JasperBatchMain custom/ZuckerReports/temp/aff882c1-684b-d2de-403e-4be367bc2f5f/cmd.properties 2&1 JasperBatchMain :: loading jasper design custom/ZuckerReports/resources/contact_counts_by_first_name.jasper JasperBatchMain :: getParameterValue(REPORT_PARAMETERS_MAP, java.util.Map) = null JasperBatchMain :: getParameterValue(JASPER_REPORT, net.sf.jasperreports.engine.JasperReport) = null JasperBatchMain :: getParameterValue(REPORT_CONNECTION, java.sql.Connection) = null JasperBatchMain :: getParameterValue(REPORT_MAX_COUNT, java.lang.Integer) = null JasperBatchMain :: getParameterValue(REPORT_DATA_SOURCE, net.sf.jasperreports.engine.JRDataSource) = null JasperBatchMain :: getParameterValue(REPORT_SCRIPTLET, net.sf.jasperreports.engine.JRAbstractScriptlet) = null JasperBatchMain :: getParameterValue(REPORT_LOCALE, java.util.Locale) = null JasperBatchMain :: getParameterValue(REPORT_RESOURCE_BUNDLE, java.util.ResourceBundle) = null JasperBatchMain :: getParameterValue(REPORT_TIME_ZONE, java.util.TimeZone) = null JasperBatchMain :: getParameterValue(REPORT_FORMAT_FACTORY, net.sf.jasperreports.engine.util.FormatFactory) = null JasperBatchMain :: getParameterValue(REPORT_CLASS_LOADER, java.lang.ClassLoader) = null JasperBatchMain :: getParameterValue(REPORT_URL_HANDLER_FACTORY, java.net.URLStreamHandlerFactory) = null JasperBatchMain :: getParameterValue(REPORT_FILE_RESOLVER, net.sf.jasperreports.engine.util.FileResolver) = null JasperBatchMain :: getParameterValue(REPORT_VIRTUALIZER, net.sf.jasperreports.engine.JRVirtualizer) = null JasperBatchMain :: getParameterValue(IS_IGNORE_PAGINATION, java.lang.Boolean) = null JasperBatchMain :: getParameterValue(REPORT_TEMPLATES, java.util.Collection) = null log4j:WARN No appenders could be found for logger (net.sf.jasperreports.extensions.ExtensionsEnviron ment). log4j:WARN Please initialize the log4j system properly. Exception in thread "main" java.lang.IllegalArgumentException: Null 'key' argument. 
        at org.jfree.data.DefaultKeyedValues.setValue(DefaultKeyedValues.java:229)
        at org.jfree.data.DefaultKeyedValues2D.setValue(DefaultKeyedValues2D.java:337)
        at org.jfree.data.DefaultKeyedValues2D.addValue(DefaultKeyedValues2D.java:303)
        at org.jfree.data.category.DefaultCategoryDataset.addValue(DefaultCategoryDataset.java:222)
        at net.sf.jasperreports.charts.fill.JRFillCategoryDataset.customIncrement(JRFillCategoryDataset.java:143)
        at net.sf.jasperreports.engine.fill.JRFillElementDataset.increment(JRFillElementDataset.java:175)
        at net.sf.jasperreports.engine.fill.JRCalculator.calculateVariables(JRCalculator.java:148)
        at net.sf.jasperreports.engine.fill.JRVerticalFiller.fillDetail(JRVerticalFiller.java:736)
        at net.sf.jasperreports.engine.fill.JRVerticalFiller.fillReportContent(JRVerticalFiller.java:272)
        at net.sf.jasperreports.engine.fill.JRVerticalFiller.fillReport(JRVerticalFiller.java:114)
        at net.sf.jasperreports.engine.fill.JRBaseFiller.fill(JRBaseFiller.java:923)
        at net.sf.jasperreports.engine.fill.JRBaseFiller.fill(JRBaseFiller.java:826)
        at net.sf.jasperreports.engine.fill.JRFiller.fillReport(JRFiller.java:59)
        at at.go_mobile.zuckerreports.JasperBatchMain.main(JasperBatchMain.java:126)

    The same report runs correctly in another SugarCRM installation on the same server. The installation in which the report runs correctly is of the same version, and has the same version of the ZuckerReports module. The report previously ran correctly on both installations. I think that the only changes that have been made, since the report was last successfully run, on the installation in which the report now does not work are the additions of a few custom fields in the Contacts module. These changes should have nothing to do with ZuckerReports. I have tried uninstalling and reinstalling the ZuckerReports module, but the problem remains. A Google search for the warnings given in the error message, i.e.:
    * log4j:WARN No appenders could be found for logger (net.sf.jasperreports.extensions.ExtensionsEnvironment).
    * log4j:WARN Please initialize the log4j system properly.
    returns a few links (not specific to ZuckerReports) with tips similar to the following: log4j.properties or log4j.xml needs to be on the classpath where log4j can find it. I cannot find a file with either of those names anywhere on my server, and yet the report can be run successfully on one of my SugarCRM installations. So I figure log4j must be being configured another way. Can anyone suggest a way to solve this problem? Or explain how I might discover how log4j is configured in ZuckerReports? Or explain how I might compare the working with the non-working installation in order to help find a solution? (I have tried searching for files containing "log4j" in both installations and comparing, but all I can find are .jar files (nothing I can read with a text editor), and the .jar files found in each installation appear to be the same.)
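    For what it's worth, the two log4j warnings are usually cosmetic: log4j 1.x prints them whenever no configuration file is found on the classpath. A minimal log4j.properties dropped into one of the classpath entries from the command line above (custom/ZuckerReports/resources/ is an assumption; any entry should do) would silence them. This is standard log4j 1.2 syntax, sketched here, not a file shipped by ZuckerReports:

        log4j.rootLogger=WARN, stdout
        log4j.appender.stdout=org.apache.log4j.ConsoleAppender
        log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
        log4j.appender.stdout.layout.ConversionPattern=%d %-5p %c - %m%n

    The actual failure is the IllegalArgumentException: Null 'key' argument thrown by JFreeChart, which typically means a row with a NULL group or category value reached the chart dataset, plausibly data touched by the newly added custom fields rather than anything to do with log4j.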

    Read the article

  • Backing up my data causes my server to crash using Symantec Backup Exec 12, or How I Came to Loathe Irony

    - by Kyle Noland
    I have a Dell PowerEdge 2850 running Windows Server 2003. It is the primary file server for one of my clients. I have another server also running Windows Server 2003 that acts as the core media server for Symantec Backup Exec 12. I recently upgraded from Backup Exec 11d to 12. This upgrade was necessary because we also just upgraded from Exchange 2003 to Exchange 2007. After the upgrade I had to push-install the new version 12 Backup Exec Remote Agents to each of the servers I am backing up (about 6 total). 5 of my servers are doing just fine, faithfully completing backups every night. My file server routinely crashes. Observations: When the server crashes, it does not blue screen, it just locks up completely. Even the mouse is unresponsive. If you leave the server locked up long enough, it will eventually reboot itself and hang on the Windows splash screen. There is absolutely zero useful Event Viewer evidence of a problem. The logs go from routine logging to an Unexplained Shutdown Event the next morning when I have to hard reset the server to get it to boot. 90% of the time the server does not boot cleanly, it hangs on the Windows splash screen. I don't have any light to shed here. When the server hangs all I can do is hard reset it and try again. Even after a successful boot and chkdsk /r operation, if you reboot the machine, you have a 90% chance it won't back up again cleanly. The back story: This server started crashing during nightly backups about a month ago. I tried everything I could think of to troubleshoot the problem and eventually had to give up because I could not keep coming to the office at 4 AM to try to get the server back online. One Friday I got lucky and the server stayed up for its entire full backup. I took this opportunity to restore the full backup to a temporary server I set up and switched all my users to the temporary. Then I reloaded the ailing file server. I kept all my users on the temporary file server for about 3 weeks. I installed the same Backup Exec Remote Agent and Trend Micro A/V client on the temporary server that I was using on the regular file server. During this time, I had absolutely no problems backing up the temporary server. I tested the reloaded file server extensively. I rebooted the server once an hour every day for 3 weeks trying to make it fail. It never did. I felt confident that the reload was the answer to my problems. I moved all of the data from the temporary server back to the regular server. I got 3 nightly backups out of it before it locked up again and started the familiar failure to boot cleanly behavior. This weekend I decided to monitor the file server through the entire backup job. I RDPd into the file server and also into the server running Backup Exec. On the file server I opened the Task Manager so I could view the processes and watch CPU and memory usage. Everything was running smoothly for about 60GB worth of backup. Then I noticed that the byte count of the backup job in Backup Exec had stopped progressing. I looked back over at my RDP session into the file server, and I was getting real time updates about CPU and memory usage still - both nearly 0%, which is unusual. Backups usually hover around 40% usage for the duration of the backup job. Let me reiterate this point: The screen was refreshing and I was getting real time Task Manager updates - until I clicked on the Start menu. The screen went black and the server locked up. 
    In truth, I think the server had already locked up; the video card just hadn't figured it out yet. I went back into my bag of tricks: driving to the office and hard resetting the server over and over again when it hangs at the Windows splash screen. I did this for 2 hours without getting a successful boot. I started panicking because I did not have a decent backup to use to get everything back onto the working temporary file server. Once I exhausted everything I knew to do, I took a deep breath, booted to the Windows Server 2003 CD and performed a repair installation of Windows. The server came back up fine, with all of my data intact. I can now reboot the server at will and it will come back up cleanly. The problem is that I'm afraid that as soon as I try to back that data up again I will be back at square one. So let me sum things up. Here is what I've done so far to troubleshoot this server:
    Deleted and recreated the RAID 5 sets. Initialized the drives.
    Reloaded the server with a fresh Server 2003 install.
    Confirmed with Dell that I have installed the latest, Dell-approved BIOS and NIC drivers.
    Uninstalled / reinstalled the Backup Exec Remote Agent.
    Uninstalled the Trend Micro A/V client.
    Configured the server not to reboot itself after a blue screen so I can see any stop error. I used to think the server was blue screening, but since I enabled this setting I now know that the server just completely locks up.
    Ran chkdsk /r from the Windows Recovery Console. Several errors were found and corrected, but this did not help my problem.
    Help confirm or deny the following assumptions:
    There are two problems at work here: why the server is locking up in the first place, and why the server won't boot cleanly after a lockup.
    This is ultimately a software problem. The server works fine and can be rebooted cleanly all day long - until the first lockup - following a fresh OS load or even a repair installation.
    This is not a problem with Backup Exec in general. All of my other servers back up just fine. For the record, all of the other servers run Server 2003, and some of them house more data than the file server in question here.
    Any help is appreciated. The irony is almost too much to bear. Backing up my data is what is jeopardizing it.
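    For reference, the "don't reboot after a blue screen" setting described above can also be flipped from the command line on Server 2003; wmic's recoveros alias is standard, though whether it helps diagnose a hard lockup is another matter:

        C:\> wmic recoveros set AutoReboot = False
        C:\> wmic recoveros get AutoReboot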

    Read the article

  • EC2 instance suddenly refusing SSH connections and won't respond to ping

    - by Chris
    My instance was running fine and this morning I was able to access a Ruby on Rails app hosted on it. An hour later I suddenly wasn't able to access my site, my SSH connection attempts were refused and the server wasn't even responding to ping. I didn't change anything on my system during that hour and reboots aren't fixing it. I've never had any problems connecting or pinging the system before. Can someone please help? This is on my production system! OS: CentOS 5 AMI ID: ami-10b55379 Type: m1.small [] ~% ssh -v *****@meeteor.com OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009 debug1: Reading configuration data /etc/ssh_config debug1: Connecting to meeteor.com [184.73.235.191] port 22. debug1: connect to address 184.73.235.191 port 22: Connection refused ssh: connect to host meeteor.com port 22: Connection refused [] ~% ping meeteor.com PING meeteor.com (184.73.235.191): 56 data bytes Request timeout for icmp_seq 0 Request timeout for icmp_seq 1 Request timeout for icmp_seq 2 ^C --- meeteor.com ping statistics --- 4 packets transmitted, 0 packets received, 100.0% packet loss [] ~% ========= System Log ========= Restarting system. Linux version 2.6.16-xenU ([email protected]) (gcc version 4.0.1 20050727 (Red Hat 4.0.1-5)) #1 SMP Mon May 28 03:41:49 SAST 2007 BIOS-provided physical RAM map: Xen: 0000000000000000 - 000000006a400000 (usable) 980MB HIGHMEM available. 727MB LOWMEM available. NX (Execute Disable) protection: active IRQ lockup detection disabled Built 1 zonelists Kernel command line: root=/dev/sda1 ro 4 Enabling fast FPU save and restore... done. Enabling unmasked SIMD FPU exception support... done. Initializing CPU#0 PID hash table entries: 4096 (order: 12, 65536 bytes) Xen reported: 2599.998 MHz processor. Dentry cache hash table entries: 131072 (order: 7, 524288 bytes) Inode-cache hash table entries: 65536 (order: 6, 262144 bytes) Software IO TLB disabled vmalloc area: ee000000-f53fe000, maxmem 2d7fe000 Memory: 1718700k/1748992k available (1958k kernel code, 20948k reserved, 620k data, 144k init, 1003528k highmem) Checking if this processor honours the WP bit even in supervisor mode... Ok. Calibrating delay using timer specific routine.. 5202.30 BogoMIPS (lpj=26011526) Mount-cache hash table entries: 512 CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line) CPU: L2 Cache: 1024K (64 bytes/line) Checking 'hlt' instruction... OK. Brought up 1 CPUs migration_cost=0 Grant table initialized NET: Registered protocol family 16 Brought up 1 CPUs xen_mem: Initialising balloon driver. highmem bounce pool size: 64 pages VFS: Disk quotas dquot_6.5.1 Dquot-cache hash table entries: 1024 (order 0, 4096 bytes) Initializing Cryptographic API io scheduler noop registered io scheduler anticipatory registered (default) io scheduler deadline registered io scheduler cfq registered i8042.c: No controller found. RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize Xen virtual console successfully installed as tty1 Event-channel device installed. netfront: Initialising virtual ethernet driver. 
mice: PS/2 mouse device common for all mice md: md driver 0.90.3 MAX_MD_DEVS=256, MD_SB_DISKS=27 md: bitmap version 4.39 NET: Registered protocol family 2 Registering block device major 8 IP route cache hash table entries: 65536 (order: 6, 262144 bytes) TCP established hash table entries: 262144 (order: 9, 2097152 bytes) TCP bind hash table entries: 65536 (order: 7, 524288 bytes) TCP: Hash tables configured (established 262144 bind 65536) TCP reno registered TCP bic registered NET: Registered protocol family 1 NET: Registered protocol family 17 NET: Registered protocol family 15 Using IPI No-Shortcut mode md: Autodetecting RAID arrays. md: autorun ... md: ... autorun DONE. kjournald starting. Commit interval 5 seconds EXT3-fs: mounted filesystem with ordered data mode. VFS: Mounted root (ext3 filesystem) readonly. Freeing unused kernel memory: 144k freed *************************************************************** *************************************************************** ** WARNING: Currently emulating unsupported memory accesses ** ** in /lib/tls glibc libraries. The emulation is ** ** slow. To ensure full performance you should ** ** install a 'xen-friendly' (nosegneg) version of ** ** the library, or disable tls support by executing ** ** the following as root: ** ** mv /lib/tls /lib/tls.disabled ** ** Offending process: init (pid=1) ** *************************************************************** *************************************************************** Pausing... 5Pausing... 4Pausing... 3Pausing... 2Pausing... 1Continuing... INIT: version 2.86 booting Welcome to CentOS release 5.4 (Final) Press 'I' to enter interactive startup. Setting clock : Fri Oct 1 14:35:26 EDT 2010 [ OK ] Starting udev: [ OK ] Setting hostname localhost.localdomain: [ OK ] No devices found Setting up Logical Volume Management: [ OK ] Checking filesystems Checking all file systems. [/sbin/fsck.ext3 (1) -- /] fsck.ext3 -a /dev/sda1 /dev/sda1: clean, 275424/1310720 files, 1161123/2621440 blocks [ OK ] Remounting root filesystem in read-write mode: [ OK ] Mounting local filesystems: [ OK ] Enabling local filesystem quotas: [ OK ] Enabling /etc/fstab swaps: [ OK ] INIT: Entering runlevel: 4 Entering non-interactive startup Starting background readahead: [ OK ] Applying ip6tables firewall rules: modprobe: FATAL: Module ip6_tables not found. ip6tables-restore v1.3.5: ip6tables-restore: unable to initializetable 'filter' Error occurred at line: 3 Try `ip6tables-restore -h' or 'ip6tables-restore --help' for more information. [FAILED] Applying iptables firewall rules: [ OK ] Loading additional iptables modules: ip_conntrack_netbios_ns [ OK ] Bringing up loopback interface: [ OK ] Bringing up interface eth0: Determining IP information for eth0... done. [ OK ] Starting auditd: [FAILED] Starting irqbalance: [ OK ] Starting portmap: [ OK ] FATAL: Module lockd not found. Starting NFS statd: [ OK ] Starting RPC idmapd: FATAL: Module sunrpc not found. FATAL: Error running install command for sunrpc Error: RPC MTAB does not exist. Starting system message bus: [ OK ] Starting Bluetooth services:[ OK ] [ OK ] Can't open RFCOMM control socket: Address family not supported by protocol Mounting other filesystems: [ OK ] Starting PC/SC smart card daemon (pcscd): [ OK ] Starting hidd: Can't open HIDP control socket: Address family not supported by protocol [FAILED] Starting autofs: Starting automount: automount: test mount forbidden or incorrect kernel protocol version, kernel protocol version 5.00 or above required. 
[FAILED] [FAILED] Starting sshd: [ OK ] Starting cups: [ OK ] Starting sendmail: [ OK ] Starting sm-client: [ OK ] Starting console mouse services: no console device found[FAILED] Starting crond: [ OK ] Starting xfs: [ OK ] Starting anacron: [ OK ] Starting atd: [ OK ] % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 390 100 390 0 0 58130 0 --:--:-- --:--:-- --:--:-- 58130 100 390 100 390 0 0 56984 0 --:--:-- --:--:-- --:--:-- 0 Starting yum-updatesd: [ OK ] Starting Avahi daemon... [ OK ] Starting HAL daemon: [ OK ] Starting OSSEC: [ OK ] Starting smartd: [ OK ] c CentOS release 5.4 (Final) Kernel 2.6.16-xenU on an i686 domU-12-31-39-00-C4-97 login: INIT: Id "2" respawning too fast: disabled for 5 minutes INIT: Id "3" respawning too fast: disabled for 5 minutes INIT: Id "4" respawning too fast: disabled for 5 minutes INIT: Id "5" respawning too fast: disabled for 5 minutes INIT: Id "6" respawning too fast: disabled for 5 minutes
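    Since the console log above shows sshd starting [ OK ], "connection refused" plus ping timeouts points at filtering in front of the instance rather than a dead daemon. A first pass with the classic ec2-api-tools (a sketch; the instance ID is a placeholder and configured credentials are assumed):

        $ ec2-get-console-output i-xxxxxxxx    # did the instance finish booting?
        $ ec2-describe-group default           # is port 22 still authorized in the security group?
        $ ec2-authorize default -p 22          # re-add the SSH rule if it is missing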

    Read the article

  • DNS name server on CentOS 6.3 64-bit cannot be reached from outside

    - by user135855
    I got a problem with CentOS 6.3 64-bit. I want to set up my nameserver with bind here. I am listing all my configuration:

    [root@izyon92 ~]# cat /etc/hosts -------------- 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 182.19.26.92 izyon92.zyonize1.com izyon92

    [root@izyon92 ~]# cat /etc/sysconfig/network --------------------------------------------- NETWORKING=yes HOSTNAME=izyon92.zyonize1.com GATEWAY=182.19.26.89

    [root@izyon92 ~]# cat /etc/resolv.conf -------------------------------------------- # Generated by NetworkManager search zyonize1.com nameserver 182.19.26.92

    [root@izyon92 ~]# cat /etc/named.conf -------------------------------------------- // // named.conf // // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS // server as a caching only nameserver (as a localhost DNS resolver only). // // See /usr/share/doc/bind*/sample/ for example named configuration files. // options { #listen-on port 53 { 127.0.0.1; }; listen-on-v6 port 53 { none; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; allow-query { 182.19.26.92; }; recursion yes; dnssec-enable yes; dnssec-validation yes; dnssec-lookaside auto; /* Path to ISC DLV key */ bindkeys-file "/etc/named.iscdlv.key"; managed-keys-directory "/var/named/dynamic"; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; zone "." IN { type hint; file "named.ca"; }; include "/etc/named.rfc1912.zones"; include "/etc/named.root.key";

    [root@izyon92 ~]# cat /etc/named.rfc1912.zones -------------------------------------------------- // named.rfc1912.zones: // // Provided by Red Hat caching-nameserver package // // ISC BIND named zone configuration for zones recommended by // RFC 1912 section 4.1 : localhost TLDs and address zones // and http://www.ietf.org/internet-drafts/draft-ietf-dnsop-default-local-zones-02.txt // (c)2007 R W Franks // // See /usr/share/doc/bind*/sample/ for example named configuration files. // zone "localhost.localdomain" IN { type master; file "named.localhost"; allow-update { none; }; }; zone "localhost" IN { type master; file "named.localhost"; allow-update { none; }; }; zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" IN { type master; file "named.loopback"; allow-update { none; }; }; zone "1.0.0.127.in-addr.arpa" IN { type master; file "named.loopback"; allow-update { none; }; }; zone "0.in-addr.arpa" IN { type master; file "named.empty"; allow-update { none; }; }; zone "zyonize1.com" { type master; file "/var/named/zyonize.com.hosts"; };

    [root@izyon92 ~]# cat /var/named/zyonize.com.hosts --------------------------------------------------------- $ttl 38400 zyonize1.com. IN SOA 182.19.26.92. dev\.izyon.gmail.com. ( 1347436958 10800 3600 604800 38400 ) zyonize1.com. IN NS 182.19.26.92. zyonize1.com. IN A 182.19.26.92 www.zyonize1.com. IN A 182.19.26.92 izyon92.zyonize1.com. IN A 182.19.26.92

    I have disabled selinux and stopped iptables.
    dig and nslookup are working fine on the same machine:

    [root@izyon92 ~]# dig zyonize1.com ---------------------------------------- ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.2 <<>> zyonize1.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55751 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 0 ;; QUESTION SECTION: ;zyonize1.com. IN A ;; ANSWER SECTION: zyonize1.com. 38400 IN A 182.19.26.92 ;; AUTHORITY SECTION: zyonize1.com. 38400 IN NS 182.19.26.92. ;; Query time: 0 msec ;; SERVER: 182.19.26.92#53(182.19.26.92) ;; WHEN: Fri Sep 14 00:09:19 2012 ;; MSG SIZE rcvd: 72

    [root@izyon92 ~]# nslookup zyonize1.com ---------------------------------------------- Server: 182.19.26.92 Address: 182.19.26.92#53 Name: zyonize1.com Address: 182.19.26.92

    But here is the problem I am facing. I have a Windows machine, and to test this DNS nameserver I set its first IPv4 DNS server to 182.19.26.92. Here are the details: Connection-specific DNS Suffix: Description: Realtek PCIe GBE Family Controller Physical Address: ?14-FE-B5-9F-3A-A8 DHCP Enabled: No IPv4 Address: 192.168.2.50 IPv4 Subnet Mask: 255.255.255.0 IPv4 Default Gateway: 192.168.2.1 IPv4 DNS Servers: 182.19.26.92, 182.19.95.66 IPv4 WINS Server: NetBIOS over Tcpip Enabled: Yes Link-local IPv6 Address: fe80::45cc:2ada:c13:ca42%16 IPv6 Default Gateway: IPv6 DNS Server:

    When I am pinging from this machine it is not finding the server, whereas from another server with another live IP, running Fedora, ping is working fine.
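    Two things stand out in the configuration above, assuming the goal is a public authoritative server. First, ping is the wrong test: DNS answers on UDP/TCP port 53 and many networks drop ICMP, so test from the Windows machine with nslookup zyonize1.com 182.19.26.92 instead. Second, allow-query matches client source addresses, so { 182.19.26.92; } only permits the server to query itself; every outside resolver is refused. A hedged sketch of the relevant options block:

        options {
            listen-on port 53 { any; };
            allow-query { any; };   // allow-query filters clients, not the server address
            recursion no;           // assumption: authoritative-only; avoid an open resolver
            ...                     // keep the remaining options as above
        };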

    Read the article

  • Toorcon 14

    - by danx
    Toorcon 2012 Information Security Conference San Diego, CA, http://www.toorcon.org/ Dan Anderson, October 2012

    It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon Conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14—see earlier blogs for some of the previous Toorcons I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California. The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers. Also, I have only reviewed some of the talks below, those that I attended and that interested me.

    MalAndroid—the Crux of Android Infections, Aditya K. Sood
    Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro
    Privacy at the Handset: New FCC Rules?, Valkyrie
    Hacking Measured Boot and UEFI, Dan Griffin
    You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik
    What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold
    Accessibility and Security, Anna Shubina
    Stop Patching, for Stronger PCI Compliance, Adam Brand
    McAfee Secure & Trustmarks — a Hacker's Best Friend, Jay James & Shane MacDougall

    MalAndroid—the Crux of Android Infections Aditya K. Sood, IOActive, Michigan State PhD candidate

    Aditya talked about Android smartphone malware. There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most have unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit). Android protection includes sandboxing, a security scanner, app permissions, and a screened Android app market. The Android permission checker has fine-grained resource control and policy enforcement. Android static analysis also includes a static analysis app checker (bouncer) and a vulnerability checker.

    What security problems does Android have?
    User-centric security, which depends on the user to grant permission and make smart decisions. But users don't care or think about malware (they're not aware, not paranoid). All they want is functionality, extensibility, mobility.
    Android had no "proper" encryption before Android 3.0.
    No built-in protection against social engineering and web tricks.
    Alternative Android app markets are unsafe. Simply visiting some markets can infect Android.

    Aditya classified Android malware types as:
    Type A—Apps. These interact with the Android app framework. For example, a fake Netflix app. Or Android Gold Dream (game), which uploads user files in a stealthy manner to a remote location.
    Type K—Kernel. Exploits underlying Linux libraries or kernel.
    Type H—Hybrid. These use multiple layers (app framework, libraries, kernel). These are most commonly used by Android botnets, which are popular with Chinese botnet authors.

    What are the threats from Android malware? These include leaking info (contacts), banking fraud, corporate network attacks, malware advertising, and malware "Hacktivism" (the promotion of social causes—for example, promoting specific leaders of the Tunisian or Iranian revolutions). Android malware is frequently "masqueraded", that is, repackaged inside a legit app with malware. To avoid detection, the hidden malware is not unwrapped until runtime.
    The malware payload can be hidden in, for example, PNG files. Less common are Android bootkits—there aren't many around. What they do is hijack the Android init framework—altering system programs and daemons, then deleting itself. For example, the DKF Bootkit (China).

    Android app problems: no code signing (all self-signed); native code execution; permission sandbox—all or none; alternate marketplaces; no robust Android malware detection at the network level; delayed patch process.

    Programming Weird Machines with ELF Metadata Rebecca "bx" Shapiro, Dartmouth College, NH https://github.com/bx/elf-bf-tools @bxsays on twitter

    Definitions. "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). A "weird machine" uses undocumented computation sources (I think of them as unintended virtual machines). Some examples of "weird machines" are those that return to a weird location, do SQL injection, or corrupt the heap.

    Bx then talked about using ELF metadata as (an unintended) "weird machine". Some ELF background: a compiler takes source code and generates an ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes the ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables—one at link time and another one at loading time: .rela.dyn (link time) and .dynsym (dynamic table). GOT: Global Offset Table of addresses for dynamically-linked functions. PLT: Procedure Linkage Tables—works with GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory).

    For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted for executing viruses, by tricking the runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions. Think of the ELF symbol table as an "assembly language" interpreter. It has these elements:
    instructions: Add, move, jump if not 0 (jnz)
    think of symbol table entries as "registers"
    symbol table value is "contents"
    immediate values are constants
    direct values are addresses (e.g., 0xdeadbeef)
    move instruction: is a relocation table entry
    add instruction: relocation table "addend" entry
    jnz instruction: takes multiple relocation table entries

    The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on the stack at "end" and uses IFUNC table entries (containing a function pointer address). The ELF weird machine, called "Brainfu*k" (BF), has 8 instructions (pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print) and three registers.

    Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library function call that drops privilege, so it remained root. Then BF modified the ping -t argument to execute the -t filename as root. It's best to show what this modified ping does with an example:

    $ whoami
    bx
    $ ping localhost -t backdoor.sh # executes backdoor
    $ whoami
    root
    $

    The modified code increased from 285948 bytes to 290209 bytes.
    A BF tool compiles an "executable" by modifying the symbol table in an existing ELF executable. The tool modifies the .dynsym and .rela.dyn tables, but not code or data.

    Privacy at the Handset: New FCC Rules? "Valkyrie" (Christie Dudley, Santa Clara Law JD candidate)

    Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation—no worries! Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated. But Verizon recommends, as an aggregation workaround, "recollating" data with other databases to identify customers indirectly. The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. Also, the carriers say mobile phone privacy is an FTC responsibility (not FCC). The FTC is trying to improve mobile app privacy, but the FTC has no authority over carrier / customer relationships. As a side note, Apple iPhones are unique in that carriers have extra control over iPhones that they don't have with other smartphones. As a result iPhones may be more regulated.

    Who are the consumer advocates? Everyone knows EFF, but EPIC (Electronic Privacy Info Center), although more obscure, is more relevant.

    What to do? Carriers must be accountable. Opt-in and opt-out at any time. Carriers need an incentive to grant users control for those who want it, by holding them liable and responsible for breaches on their clock. Location information should be added to current CPNI privacy protection, and should require a "pen/trap" judicial order to obtain (which would still be a lower standard than the 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the White House. There will probably be new regulation soon, and enforcement will be a problem, but consumers will still have some benefit.

    Hacking Measured Boot and UEFI Dan Griffin, JWSecure, Inc., Seattle, @JWSdan

    Dan talked about hacking measured UEFI boot. First some terms:
    UEFI is a boot technology that is replacing BIOS (has whitelisting and blacklisting). UEFI protects devices against rootkits.
    TPM - hardware security device to store hashes and hardware-protected keys.
    "secure boot" can control at firmware level what boot images can boot.
    "measured boot" OS feature that tracks hashes (from BIOS, boot loader, kernel, early drivers).
    "remote attestation" allows remote validation and control based on policy on a remote attestation server.

    Microsoft is pushing TPM (Windows 8 required), but Google is not. Intel TianoCore is the only open source for UEFI. Dan has a Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support is already on enterprise-class machines.

    UEFI Weaknesses.
    UEFI toolkits are evolving rapidly, but UEFI has weaknesses:
    it assumes the user is an ally
    it trusts the TPM implicitly, and the TPM is attached to the computer
    the hibernate file is unprotected (disk encryption protects against this)
    protection is migrating from hardware to firmware
    delays in patching and whitelist updates
    will UEFI really be adopted by the mainstream? (smartphone hardware support, bank support, apathetic consumer support)

    You Can't Buy Security: Building the Open Source InfoSec Program Boris Sverdlik, ISDPodcast.com co-host

    Boris talked about problems typical with current security audits. "IT Security" is an oxymoron—IT exists to enable business, uptime, utilization, reporting, but doesn't care about security—IT has a conflict of interest. There's no Magic Bullet ("blinky box"), no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing is not sexy. Auditors are not the solution (security is not a checklist)—what's needed is experience, adaptability, and soft skills.

    Step 1: The first thing is to Google and learn the company end-to-end before you start. Get to know the management team (not the IT team), and meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitative risk assessment is a myth (e.g. AV*EF=SLE). Learn the different business units and legal/regulatory obligations, learn the business and where the money is made, verify the company is protected from script kiddies (easy), learn sensitive information (IP, internal use only), and start with low-hanging fruit (customer service reps and social engineering).

    Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, physical security.

    Step 3: Implementation: keep it simple, stupid. Open source, although useful, is not free (implementation cost).

    Access controls with authentication & authorization for local and remote access. MS Windows has it; otherwise use OpenLDAP, OpenIAM, etc.

    Application security: everyone tries to reinvent the wheel—use existing static analysis tools. Review high-risk apps and major revisions. Don't run different risk-level apps on the same system. Assume host/client compromised and use app-level security controls.

    Network security: VLAN != segregated, because there are too many workarounds. Use explicit firewall rules, active and passive network monitoring (snort is free), disallow end-user access to the production environment, and have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth, and SSL does not mean "safe."

    Operational controls: have change, patch, asset, & vulnerability management (OSSI is free). For change management, always review code before pushing to production. For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down the logs (as they have everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVAS and Seccubus are free.

    Security awareness: the reality is users will always click everything. Build real awareness, not a compliance-driven checkbox, and have it integrated into the culture.
    Pen test by crowdsourcing—test with logging. COSSP http://www.cossp.org/ - Comprehensive Open Source Security Project.

    What Journalists Want: The Investigative Reporters' Perspective on Hacking Dave Maas, San Diego CityBeat; Jason Leopold, Truthout.org

    The difference between hackers and investigative journalists: for hackers, the motivation varies, but the method is the same—technological specialties. For investigative journalists, it's about one thing—The Story—and they need broad info-gathering skills.

    J-School in 60 Seconds: Generic formula: person or issue of public interest, new info, or angle. Generic criteria: proximity, prominence, timeliness, human interest, oddity, or consequence.

    Media awareness of hackers and trends: journalists are becoming extremely aware of hackers with congressional debates (privacy, data breaches), demand for data-mining journalists, use of coding and web development by journalists, and journalists busted for hacking (Murdoch).

    Info gathering by investigative journalists includes public records laws. The federal Freedom of Information Act (FOIA) is good, but slow. The California Public Records Act is a lot stronger. FOIA takes forever because of foot-dragging—it helps to be specific. Often you need to sue (especially the FBI). CPRA is faster, and requests can be vague. And there are dumps and leaks (a la Wikileaks).

    Journalists want: leads, protecting themselves and their sources, and adapting tools for news gathering (Google hacking). Anonymity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web log). They don't trust encryption, and want to feel safe and secure. Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it.

    Accessibility and Security or: How I Learned to Stop Worrying and Love the Halting Problem Anna Shubina, Dartmouth College

    Anna talked about how accessibility and security are related. Accessibility of digital content (not real-world accessibility) mostly refers, for our purposes, to blind users and screen readers. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example, MS Word is an executable format—it's not a document exchange format—and more dangerous than PDF or HTML. Accessibility is often the first, and maybe only, sanity check with parsing. They have no choice, because someone may want to read what you write. Google, for example, is very particular about the web browser you use and bad at supporting other browsers. It uses JavaScript instead of links, often requiring mouseover to display content.

    PDF is a security nightmare: an executable format with embedded Flash, JavaScript, etc., and 15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging, to help with accessibility. But no PDF checker checks for incorrect tags or untagged content, or validates lists or tables. None check executable content at all.

    The "Halting Problem" is: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic security says complicated data formats are hard to parse and cannot be fully validated, due to the Halting Problem.
W3C Web Accessibility Guidelines: "Perceivable, Operable, Understandable, Robust". Not much help, except for "Robust", but here are some gems:
* all information should be parsable (paraphrasing)
* if not parsable, it cannot be converted to alternate formats
* maximize compatibility in new document formats

Executable web pages are bad for security and accessibility. They say it's for a better web experience, but is it necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report: hand-written HTML with no JavaScript, yet it drives a lot of web traffic due to good content. A bad example is Google News: hidden scrollbars, guessing user input.

Solutions: accessibility and security problems come from the same source. Expose the "better user experience" myth. Keep your corner of the Internet parsable. Remember the "Halting Problem" and recognize false solutions (checking and verifying tools).

Stop Patching, for Stronger PCI Compliance
Adam Brand, Protiviti, @adamrbrand, http://www.picfun.com/

Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of Brian's time (an IT guy), 960 hours/year, was spent patching POSs in 850 restaurants. Applying some patches makes no sense (like fixing a browser vulnerability on a server). "Scanner worship" is the overuse of vulnerability scanners: it gives a warm and fuzzy feeling and it's simple (red or green results; fix the reds), but scanners give a false sense of security. In reality, breaches from missing patches are uncommon; more common problems are default passwords, cleartext authentication, and misconfiguration (open firewall ports).

Patching myths:
Myth 1: install within 30 days of patch release. But PCI §6.1 allows a "risk-based approach" instead.
Myth 2: the vendor decides what's critical (also PCI §6.1). But §6.2 requires user ranking of vulnerabilities instead.
Myth 3: scan and rescan until it passes. But PCI §11.2.1b says this applies only to high-risk vulnerabilities.

Adam says good recommendations come from NIST 800-40: use sane patching and focus on what's really important. From NIST 800-40:
Proactive: use a proactive vulnerability management process: change control, configuration management, file integrity monitoring.
Monitor: start with the NVD and other vulnerability alerts, not scanner results.
Evaluate: public-facing system? workstation? internal server? (risk rank)
Decide: on action and timeline.
Test: pre-test patches (stability, functionality, rollback) for change control.
Install: notify, change control, tickets.

McAfee Secure & Trustmarks: a Hacker's Best Friend
Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada

"McAfee Secure Trustmark" is a website seal marketed by McAfee; a website gets this badge if it passes McAfee's remote scanning. The problem is that the removal of a trustmark acts as a flag that you're vulnerable, and it is easy to spot the status change by viewing the McAfee list on its website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines: if a site's certification image changes to a 1x1-pixel image, then it is no longer certified. Their scripts take deltas of scans to see what changed daily. The bottom line is that a change in TrustGuard status is a flag for hackers to attack your site. The entire idea of seals is silly: you're raising a flag that says whether you're vulnerable.

    Read the article

  • 12.04lts: no network internet

    - by dgermann
    Friends-- Cannot connect reliably to ethernet nor at all to Internet: Symptoms: About 2 weeks ago did an upgrade. Have not been able to connect to ethernet nor Internet. Today, for example, boot up this System76 laptop and there was no network connection. Did sudo mount -a and got some internal network connectivity: doug@ubuntu:/sam$ ping earth PING earth (192.168.0.201) 56(84) bytes of data. 64 bytes from earth (192.168.0.201): icmp_req=1 ttl=64 time=0.160 ms 64 bytes from earth (192.168.0.201): icmp_req=2 ttl=64 time=0.177 ms 64 bytes from earth (192.168.0.201): icmp_req=3 ttl=64 time=0.159 ms ^C --- earth ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 1998ms rtt min/avg/max/mdev = 0.159/0.165/0.177/0.013 ms doug@ubuntu:/sam$ ping doug2 PING doug (192.168.0.4) 56(84) bytes of data. ^C --- doug ping statistics --- 3 packets transmitted, 0 received, 100% packet loss, time 1999ms doug@ubuntu:/sam$ ping sharon PING sharon (192.168.0.111) 56(84) bytes of data. 64 bytes from sharon (192.168.0.111): icmp_req=1 ttl=128 time=0.276 ms ^C --- sharon ping statistics --- 6 packets transmitted, 1 received, 83% packet loss, time 5031ms rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms doug@ubuntu:/sam$ ping 192.168.0.1 PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data. ^C --- 192.168.0.1 ping statistics --- 6 packets transmitted, 0 received, 100% packet loss, time 4999ms doug@ubuntu:/sam$ ping earth PING earth (192.168.0.201) 56(84) bytes of data. ^C --- earth ping statistics --- 5 packets transmitted, 0 received, 100% packet loss, time 4032ms doug@ubuntu:/sam$ ping yahoo.com ping: unknown host yahoo.com doug@ubuntu:/sam$ ping ubuntu.com ping: unknown host ubuntu.com doug@ubuntu:/sam$ ping 8.8.8.8 PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data. ^C --- 8.8.8.8 ping statistics --- 14 packets transmitted, 0 received, 100% packet loss, time 13103ms Note that earth is the cifs server, and one time pinging it worked, later failed. 
Clues: doug@ubuntu:/sam$ grep -i eth /var/log/syslog |tail Aug 23 15:32:46 ubuntu kernel: [ 5328.070401] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 Aug 23 15:32:48 ubuntu kernel: [ 5330.651139] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=19090 PROTO=2 Aug 23 15:34:51 ubuntu kernel: [ 5453.072279] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 Aug 23 15:34:55 ubuntu kernel: [ 5457.085433] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16137 PROTO=2 Aug 23 15:36:56 ubuntu kernel: [ 5578.074492] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 Aug 23 15:37:00 ubuntu kernel: [ 5582.359006] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16150 PROTO=2 Aug 23 15:39:01 ubuntu kernel: [ 5703.074410] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 Aug 23 15:39:03 ubuntu kernel: [ 5705.070122] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16163 PROTO=2 Aug 23 15:41:06 ubuntu kernel: [ 5828.074387] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 Aug 23 15:41:13 ubuntu kernel: [ 5835.319941] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=23298 PROTO=2 doug@ubuntu:/sam$ ifconfig -a eth0 Link encap:Ethernet HWaddr [BLANKED] inet addr:192.168.0.7 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::21b:fcff:fe29:9dfc/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:3961 errors:0 dropped:0 overruns:0 frame:0 TX packets:2007 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:991204 (991.2 KB) TX bytes:252908 (252.9 KB) Interrupt:16 Base address:0xec00 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:2190 errors:0 dropped:0 overruns:0 frame:0 TX packets:2190 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:168052 (168.0 KB) TX bytes:168052 (168.0 KB) wlan0 Link encap:Ethernet HWaddr 00:19:d2:72:5a:0c UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) doug@ubuntu:/sam$ iwconfig lo no wireless extensions. wlan0 IEEE 802.11abg ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=15 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off eth0 no wireless extensions. 
doug@ubuntu:/sam$ lsmod Module Size Used by des_generic 21191 0 md4 12523 0 nls_iso8859_1 12617 1 nls_cp437 12751 1 vfat 17308 1 fat 55605 1 vfat usb_storage 39646 1 dm_crypt 22528 1 joydev 17393 0 snd_hda_codec_analog 75395 1 snd_hda_intel 32719 2 pcmcia 39826 0 snd_hda_codec 109562 2 snd_hda_codec_analog,snd_hda_intel snd_hwdep 13276 1 snd_hda_codec ip6t_LOG 16846 4 xt_hl 12465 6 ip6t_rt 12473 3 snd_pcm 80916 2 snd_hda_intel,snd_hda_codec nf_conntrack_ipv6 13581 7 nf_defrag_ipv6 13175 1 nf_conntrack_ipv6 ipt_REJECT 12512 1 ipt_LOG 12783 5 xt_limit 12541 12 xt_tcpudp 12531 21 xt_addrtype 12596 4 snd_seq_midi 13132 0 xt_state 12514 14 ip6table_filter 12711 1 ip6_tables 22528 3 ip6t_LOG,ip6t_rt,ip6table_filter nf_conntrack_netbios_ns 12585 0 nf_conntrack_broadcast 12541 1 nf_conntrack_netbios_ns nf_nat_ftp 12595 0 nf_nat 24959 1 nf_nat_ftp nf_conntrack_ipv4 19084 9 nf_nat nf_defrag_ipv4 12649 1 nf_conntrack_ipv4 nf_conntrack_ftp 13183 1 nf_nat_ftp nf_conntrack 73847 8 nf_conntrack_ipv6,xt_state,nf_conntrack_netbios_ns,nf_conntrack_broadcast,nf_nat_ftp,nf_nat,nf_conntrack_ipv4,nf_conntrack_ftp iptable_filter 12706 1 ip_tables 18106 1 iptable_filter snd_rawmidi 25424 1 snd_seq_midi psmouse 86982 0 x_tables 22011 13 ip6t_LOG,xt_hl,ip6t_rt,ipt_REJECT,ipt_LOG,xt_limit,xt_tcpudp,xt_addrtype,xt_state,ip6table_filter,ip6_tables,iptable_filter,ip_tables arc4 12473 2 r592 17808 0 snd_seq_midi_event 14475 1 snd_seq_midi memstick 15857 1 r592 yenta_socket 27465 0 serio_raw 13027 0 pcmcia_rsrc 18367 1 yenta_socket iwl3945 73186 0 pcmcia_core 21511 3 pcmcia,yenta_socket,pcmcia_rsrc iwl_legacy 71334 1 iwl3945 snd_seq 51592 2 snd_seq_midi,snd_seq_midi_event mac80211 436493 2 iwl3945,iwl_legacy snd_timer 28931 2 snd_pcm,snd_seq snd_seq_device 14172 3 snd_seq_midi,snd_rawmidi,snd_seq rfcomm 38139 0 bnep 17830 2 parport_pc 32114 0 bluetooth 158447 10 rfcomm,bnep ppdev 12849 0 cfg80211 178877 3 iwl3945,iwl_legacy,mac80211 asus_laptop 23693 0 sparse_keymap 13658 1 asus_laptop input_polldev 13648 1 asus_laptop nls_utf8 12493 6 cifs 258037 10 snd 62218 13 snd_hda_codec_analog,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device soundcore 14635 1 snd mac_hid 13077 0 snd_page_alloc 14108 2 snd_hda_intel,snd_pcm lp 17455 0 parport 40930 3 parport_pc,ppdev,lp i915 428418 3 firewire_ohci 40172 0 sdhci_pci 18324 0 sdhci 28241 1 sdhci_pci firewire_core 56940 1 firewire_ohci crc_itu_t 12627 1 firewire_core r8169 56396 0 drm_kms_helper 45466 1 i915 drm 197641 4 i915,drm_kms_helper i2c_algo_bit 13199 1 i915 video 19115 1 i915 doug@ubuntu:/sam$ dmesg |grep eth [ 0.116936] i2c-core: driver [aat2870] using legacy suspend method [ 0.116939] i2c-core: driver [aat2870] using legacy resume method [ 1.453811] r8169 0000:03:07.0: eth0: RTL8169sb/8110sb at 0xf840ec00, [BLANKED], XID 10000000 IRQ 16 [ 1.453815] r8169 0000:03:07.0: eth0: jumbo features [frames: 7152 bytes, tx checksumming: ok] [ 25.681231] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 154.037318] r8169 0000:03:07.0: eth0: link down [ 154.037329] r8169 0000:03:07.0: eth0: link down [ 154.037596] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 155.583162] r8169 0000:03:07.0: eth0: link up [ 155.583366] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 156.637048] r8169 0000:03:07.0: eth0: link down [ 156.637066] r8169 0000:03:07.0: eth0: link down [ 156.637339] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 156.773699] r8169 0000:03:07.0: eth0: link down [ 156.773983] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 158.456181] 
r8169 0000:03:07.0: eth0: link up [ 158.456378] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 159.364468] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 162.384496] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=38877 PROTO=2 [ 166.272457] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 166.422333] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=40695 PROTO=2 [ 168.736049] eth0: no IPv6 routers present [ 183.572472] r8169 0000:03:07.0: eth0: link down [ 183.572490] r8169 0000:03:07.0: eth0: link down [ 183.572934] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 185.204801] r8169 0000:03:07.0: eth0: link up [ 185.205005] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 3620.680451] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 3621.068431] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 3624.912973] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=9118 PROTO=2 [ 3631.088069] eth0: no IPv6 routers present [ 3703.062980] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 3703.465330] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=9210 PROTO=2 [ 3828.062951] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 3833.617772] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=9749 PROTO=2 [ 3953.062920] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 3955.675129] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=15983 PROTO=2 [ 4078.062922] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 4078.386319] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=15997 PROTO=2 [ 4203.062899] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 4203.559241] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16011 PROTO=2 [ 4328.062833] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 4328.930922] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16027 PROTO=2 [ 4453.062811] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 4453.950224] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16039 PROTO=2 [ 4578.062742] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 4580.626432] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 
TOS=0x00 PREC=0x00 TTL=1 ID=13738 PROTO=2 [ 4703.062704] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 4706.310170] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=15942 PROTO=2 [ 4828.062707] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 4832.174324] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16505 PROTO=2 [ 4953.062628] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 4961.469282] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16090 PROTO=2 [ 5078.062552] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 5080.776462] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=17239 PROTO=2 [ 5203.070394] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 5205.358134] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=17665 PROTO=2 [ 5328.070401] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 5330.651139] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=19090 PROTO=2 [ 5453.072279] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 5457.085433] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16137 PROTO=2 [ 5578.074492] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 5582.359006] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16150 PROTO=2 [ 5703.074410] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 5705.070122] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED]--- SRC=192.168.0.10 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=16163 PROTO=2 [ 5828.074387] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED][BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 5835.319941] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED][BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=23298 PROTO=2 [ 5953.074429] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED][BLANKED]--- SRC=192.168.0.1 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [ 5961.925481] [UFW BLOCK] IN=eth0 OUT= MAC=[BLANKED][BLANKED]--- SRC=192.168.0.5 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=24261 PROTO=2 doug@ubuntu:/sam$ lspci -nnk |grep -iA2 eth 03:07.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8169 PCI Gigabit Ethernet Controller [10ec:8169] (rev 10) Subsystem: ASUSTeK Computer Inc. 
Device [1043:11e5] Kernel driver in use: r8169 doug@ubuntu:/sam$ route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eth0 192.168.0.0 0.0.0.0 255.255.255.0 U 1 0 0 eth0 doug@ubuntu:/sam$ nm-tool NetworkManager Tool State: connected (global) - Device: eth0 [Ifupdown (eth0)] ---------------------------------------------- Type: Wired Driver: r8169 State: connected Default: yes HW Address: [BLANKED] Capabilities: Carrier Detect: yes Speed: 100 Mb/s Wired Properties Carrier: on IPv4 Settings: Address: 192.168.0.7 Prefix: 24 (255.255.255.0) Gateway: 192.168.0.1 DNS: 192.168.0.1 - Device: wlan0 ---------------------------------------------------------------- Type: 802.11 WiFi Driver: iwl3945 State: disconnected Default: no HW Address: 00:19:D2:72:5A:0C Capabilities: Wireless Properties WEP Encryption: yes WPA Encryption: yes WPA2 Encryption: yes Wireless Access Points ATT592: Infra, 30:60:23:76:FE:60, Freq 2437 MHz, Rate 54 Mb/s, Strength 24 WPA WPA2 doug@ubuntu:/sam$ nslookup ubuntu.com ;; connection timed out; no servers could be reached doug@ubuntu:/sam$ dig ubuntuforums.org ; <<>> DiG 9.8.1-P1 <<>> ubuntuforums.org ;; global options: +cmd ;; connection timed out; no servers could be reached doug@ubuntu:/sam$ sudo ifconfig eth0 up doug@ubuntu:/sam$ dhcpcd eth0 The program 'dhcpcd' can be found in the following packages: * dhcpcd * dhcpcd5 Try: sudo apt-get install <selected package> doug@ubuntu:/sam$ lspci -k 00:00.0 Host bridge: Intel Corporation Mobile 945GM/PM/GMS, 943/940GML and 945GT Express Memory Controller Hub (rev 03) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: agpgart-intel 00:02.0 VGA compatible controller: Intel Corporation Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller (rev 03) Subsystem: ASUSTeK Computer Inc. Device 1252 Kernel driver in use: i915 Kernel modules: intelfb, i915 00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03) Subsystem: ASUSTeK Computer Inc. Device 1252 00:1b.0 Audio device: Intel Corporation NM10/ICH7 Family High Definition Audio Controller (rev 02) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel 00:1c.0 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 1 (rev 02) Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.1 PCI bridge: Intel Corporation NM10/ICH7 Family PCI Express Port 2 (rev 02) Kernel driver in use: pcieport Kernel modules: shpchp 00:1d.0 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #1 (rev 02) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: uhci_hcd 00:1d.1 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #2 (rev 02) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: uhci_hcd 00:1d.2 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #3 (rev 02) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: uhci_hcd 00:1d.3 USB controller: Intel Corporation NM10/ICH7 Family USB UHCI Controller #4 (rev 02) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: uhci_hcd 00:1d.7 USB controller: Intel Corporation NM10/ICH7 Family USB2 EHCI Controller (rev 02) Subsystem: ASUSTeK Computer Inc. 
Device 1297 Kernel driver in use: ehci_hcd 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev e2) 00:1f.0 ISA bridge: Intel Corporation 82801GBM (ICH7-M) LPC Interface Bridge (rev 02) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel modules: leds-ss4200, iTCO_wdt, intel-rng 00:1f.1 IDE interface: Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 02) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: ata_piix 00:1f.3 SMBus: Intel Corporation NM10/ICH7 Family SMBus Controller (rev 02) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel modules: i2c-i801 02:00.0 Network controller: Intel Corporation PRO/Wireless 3945ABG [Golan] Network Connection (rev 02) Subsystem: Intel Corporation PRO/Wireless 3945ABG Network Connection Kernel driver in use: iwl3945 Kernel modules: iwl3945 03:01.0 CardBus bridge: Ricoh Co Ltd RL5c476 II (rev b3) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: yenta_cardbus Kernel modules: yenta_socket 03:01.1 FireWire (IEEE 1394): Ricoh Co Ltd R5C552 IEEE 1394 Controller (rev 08) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: firewire_ohci Kernel modules: firewire-ohci 03:01.2 SD Host controller: Ricoh Co Ltd R5C822 SD/SDIO/MMC/MS/MSPro Host Adapter (rev 17) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: sdhci-pci Kernel modules: sdhci-pci 03:01.3 System peripheral: Ricoh Co Ltd R5C592 Memory Stick Bus Host Adapter (rev 08) Subsystem: ASUSTeK Computer Inc. Device 1297 Kernel driver in use: r592 Kernel modules: r592 03:07.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8169 PCI Gigabit Ethernet Controller (rev 10) Subsystem: ASUSTeK Computer Inc. Device 11e5 Kernel driver in use: r8169 Kernel modules: r8169 doug@ubuntu:/sam$ Things I have tried: sudo start network-manager: no help gksudo gedit /etc/network/interfaces changed line to iface eth0 inet dhcp: no help gksudo gedit /etc/NetworkManager/NetworkManager.conf, I changed managed=false to managed=true. Then sudo service network-manager restart: no help: network is unreachable sudo pkill -9 NetworkManager: no help gksudo gedit /etc/resolve.conf added line nameseriver 8.8.8.8: no help I know very little about networking; to date this has simply worked. Thanks for your help! :- Doug.

    Read the article

  • Oracle WebCenter Portal: Pagelet Producer – What’s New in 11.1.1.6.0 Release

    - by kellsey.ruppel
Igor Plyakov, Sr. Principal Product Marketing Manager, is back to share what's new in Oracle WebCenter Portal: Pagelet Producer. In February 2012 Oracle released 11g Release 1 (11.1.1.6.0) for WebCenter Portal. Pagelet Producer (aka Ensemble), which came out with this release, added support for several new capabilities that are described in this post.

As of the 11.1.1.5.0 release, the Pagelet Producer can expose WSRP and JPDK portlets as pagelets that can then be consumed in any portal or any third-party application that does not have a WSRP consumer. Now the Pagelet Producer team is working on simplifying the use of pagelets in WebCenter Sites. To expose WSRP portlets, a new producer should be registered with Pagelet Producer, which can be done using Enterprise Manager, WLST or the Pagelet Producer Administration Console (for details see Section 25.9 of the Administrator's Guide for Oracle WebCenter Portal). If the producer requires authentication, Pagelet Producer allows you to select and use one of the standard WSS token profiles. After registration is finished, a new resource is created and automatically populated with pagelets that represent the portlets associated with the WSRP endpoint. For the 11.1.1.6.0 release we completed extensive testing of consuming all WebCenter Services that are exposed as WSRP portlets by the E2.0 Producer and delivering them as pagelets to the WebCenter Interaction portal.

In the Pagelet Producer 11.1.1.6.0 release we added an OpenSocial container that allows consuming gadgets from other OpenSocial containers, e.g. iGoogle, and exposing them as pagelets. You can also use Pagelet Producer to host OpenSocial gadgets that leverage the OpenSocial APIs it supports: People, Activities, Appdata and Pub-Sub. Note that People and Activities expose the People Connections and Activity Stream from WebCenter Portal, i.e. to use these features Pagelet Producer requires a connection to the WebCenter Portal schema. Pub-Sub allows leveraging the OpenAJAX Hub API for inter-gadget communication.

In addition to these major new additions, in the Pagelet Producer 11.1.1.6.0 release we also extended several functional modules:
* The Clipping module was extended to support clipping of multiple regions on a web resource page and re-assembly of these separately clipped regions into a single pagelet.
* The auto-login feature can now be applied to web resources protected with Kerberos authentication; you will find this new functionality handy for consuming SharePoint web parts.
* The logging module now supports logging the full HTTP traffic between the Pagelet Producer and the proxied web resource.

Finally, like the rest of the WebCenter Portal stack, the Pagelet Producer 11.1.1.6.0 can run on IBM WebSphere Application Server.

    Read the article

  • Slides and Code from my Silverlight MVVM Talk at DevConnections

    - by dwahlin
I had a great time at the DevConnections conference in Las Vegas this year where Visual Studio 2010 and Silverlight 4 were launched. While at the conference I had the opportunity to give a full-day Silverlight workshop as well as 4 different talks and met a lot of people developing applications in Silverlight. I also had a chance to appear on a live broadcast of Channel 9 with John Papa, Ward Bell and Shawn Wildermuth, record a video with Rick Strahl covering jQuery versus Silverlight and record a few podcasts on Silverlight and ASP.NET MVC 2. It was a really busy 4 days but I had a lot of fun chatting with people and hearing about different business problems they were solving with ASP.NET and/or Silverlight. Thanks to everyone who attended my sessions and took the time to ask questions and stop by to talk one-on-one.

One of the talks I gave covered the Model-View-ViewModel pattern and how it can be used to build architecturally sound applications. Topics covered in the talk included:

* Understanding the MVVM pattern
* Benefits of the MVVM pattern
* Creating a ViewModel class
* Implementing INotifyPropertyChanged in a ViewModelBase class
* Binding a ViewModel declaratively in XAML
* Binding a ViewModel with code
* ICommand and ButtonBase commanding support in Silverlight 4
* Using InvokeCommandBehavior to handle additional commanding needs
* Working with ViewModels and Sample Data in Blend
* Messaging support with EventBus classes, EventAggregator and Messenger
* My personal take on code in a code-beside file (I'm all in favor of it when used appropriately for message boxes, child windows, animations, etc.)

One of the samples I showed in the talk was intended to teach all of the concepts mentioned above while keeping things as simple as possible. The sample demonstrates quite a few things you can do with Silverlight and the MVVM pattern, so check it out and feel free to leave feedback about things you like, things you'd do differently or anything else. MVVM is simply a pattern, not a way of life, so there are many different ways to implement it. If you're new to the subject of MVVM check out the following resources. I wish this talk would've been recorded (especially since my live and canned demos all worked :-)) but these resources will help get you going quickly.

* Getting Started with the MVVM Pattern in Silverlight Applications
* Model-View-ViewModel (MVVM) Explained
* Laurent Bugnion's Excellent Talk at MIX10
* Download sample code and slides from my DevConnections talk

For more information about onsite, online and video training, mentoring and consulting solutions for .NET, SharePoint or Silverlight please visit http://www.thewahlingroup.com.
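As a companion to the "Implementing INotifyPropertyChanged in a ViewModelBase class" topic above, here is a minimal sketch of that idea; the class and property names are illustrative, not the ones from the conference sample:

using System.ComponentModel;

// Reusable base class that raises change notifications for bound properties.
public abstract class ViewModelBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}

// Hypothetical ViewModel showing the base class in use.
public class ReportViewModel : ViewModelBase
{
    private string _title;

    public string Title
    {
        get { return _title; }
        set
        {
            _title = value;
            OnPropertyChanged("Title"); // updates any XAML bindings on Title
        }
    }
}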

    Read the article

  • Cost justification for buying a 32GB superfast Alienware M18x with a price tag of around £5K ($10K)

    - by tonyrogerson
When considering buying a laptop that's going to cost me around £5,000 I really need to justify the purchase from a business perspective. My Lenovo W700 has served me very well for the last 2 years; it's an extremely good machine and as solid as a rock (and as heavy); alas, it is limited to 8GB of RAM. As SQL Server 2012 approaches, and with my interest in working in the Business Intelligence space over the next year or two, it is clear I need a powerful machine on which I can run a full infrastructure, virtualised.

My requirements:

For High Availability / Disaster Recovery research and demonstration:
* a machine for a domain controller
* four machines in a shared disk cluster (SQL Server clustering, active-active, etc.)
* five machines in a file share cluster (SQL Server Availability Groups)

For Business Intelligence research and demonstration: I am not entirely sure how many machines I want to run here, but it would be enough to cover the entire BI stack in an enterprise setting: SharePoint, SQL Server, etc.

For Big Data research: I have a fondness for the NoSQL approach to scalability and dealing with large volumes, so I need a number of machines to research VoltDB, Hadoop, etc.

As you can see, the requirements for a SQL Server consultant to service their clients well are considerable. Will 8GB suffice? Alas no, it will no longer do. I'm a very strong believer that in order to do your job well you must expense it; shortcuts only cost you time. Waiting 5 minutes instead of an hour for something to run not only saves me time but my clients' time; I can do things quicker and, more importantly, I can demonstrate concepts. My W700 with its 8GB of RAM and SSDs cost me around £3.5K two years ago. To be honest I've not got the full use I wanted out of it, but the machine has had the power when I've needed it; it's served me and my clients well.

Alienware now do a model (the M18x) with 32GB of RAM; yes, 32GB in a laptop! Dual drives, so I can whack a couple of really good SSDs in there, plus a decent-speed quad-core i7 with hyper-threading. I can reduce the cost of the memory by getting it from Crucial, so instead of £1.5K for 32GB it will be around £900, and I can cost-save on the SSDs as well. The beauty of the M18x is that it has USB 3.0, SATA 3 and, really importantly, eSATA; running VMs will never be easier. I can have a removable SSD with my VMs on it and can plug it into my home machine or laptop: an ideal world!

The initial outlay of £5K is peanuts compared to the benefits I'll give my clients. I will be able to present real enterprise concepts, and I'll also be able to give training on those real enterprise concepts with real, albeit virtualised, machines.

    Read the article

  • ASP.NET Asynchronous Pages and when to use them

    - by rajbk
There have been several articles posted about using asynchronous pages in ASP.NET, but none of them go into detail as to when you should use them. I finally found a great post by Thomas Marquardt that explains the process in depth. He also addresses a key misconception:

So, in your ASP.NET application, when should you perform work asynchronously instead of synchronously? Well, only 1 thread per CPU can execute at a time. Did you catch that? A lot of people seem to miss this point... only one thread executes at a time on a CPU. When you have more than this, you pay an expensive penalty: a context switch. However, if a thread is blocked waiting on work, then it makes sense to switch to another thread, one that can execute now. It also makes sense to switch threads if you want work to be done in parallel as opposed to in series, but up until a certain point it actually makes much more sense to execute work in series, again, because of the expensive context switch.

Pop quiz: if you have a thread that is doing a lot of computational work and using the CPU heavily, and this takes a while, should you switch to another thread? No! The current thread is efficiently using the CPU, so switching will only incur the cost of a context switch. Ok, well, what if you have a thread that makes an HTTP or SOAP request to another server and takes a long time, should you switch threads? Yes! You can perform the HTTP or SOAP request asynchronously, so that once the "send" has occurred, you can unwind the current thread and not use any threads until there is an I/O completion for the "receive". Between the "send" and the "receive", the remote server is busy, so locally you don't need to be blocking on a thread, but can instead make use of the asynchronous APIs provided in the .NET Framework so that you can unwind and be notified upon completion. Again, it only makes sense to switch threads if the benefit from doing so outweighs the cost of the switch.

Read more about it in these posts:
Performing Asynchronous Work, or Tasks, in ASP.NET Applications http://blogs.msdn.com/tmarq/archive/2010/04/14/performing-asynchronous-work-or-tasks-in-asp-net-applications.aspx
ASP.NET Thread Usage on IIS 7.0 and 6.0 http://blogs.msdn.com/tmarq/archive/2007/07/21/asp-net-thread-usage-on-iis-7-0-and-6-0.aspx

PS: I generally do not write posts that simply link to other posts but think it is warranted in this case.
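For Web Forms pages, one way to apply this "unwind between send and receive" advice is ASP.NET's asynchronous page support. A minimal sketch, with a hypothetical page class and URL (real-world code needs error handling; see the linked posts for the full story):

// Requires Async="true" in the page's @ Page directive.
using System;
using System.Net;
using System.Web.UI;

public partial class ReportPage : Page
{
    private HttpWebRequest _request;

    protected void Page_Load(object sender, EventArgs e)
    {
        RegisterAsyncTask(new PageAsyncTask(BeginGetData, EndGetData, GetDataTimeout, null));
    }

    private IAsyncResult BeginGetData(object sender, EventArgs e, AsyncCallback cb, object state)
    {
        _request = (HttpWebRequest)WebRequest.Create("http://example.com/data");
        // The request thread unwinds here; no thread blocks while waiting.
        return _request.BeginGetResponse(cb, state);
    }

    private void EndGetData(IAsyncResult ar)
    {
        // Runs on an I/O completion; consume the response and bind controls here.
        using (WebResponse response = _request.EndGetResponse(ar))
        {
        }
    }

    private void GetDataTimeout(IAsyncResult ar)
    {
        // Handle the timeout (e.g., render an error message).
    }
}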

    Read the article

  • Scrum for Team Foundation Server 2010

    - by Martin Hinshelwood
I will be presenting a session on “Scrum for TFS2010” not once, but twice! If you are going to be at the Aberdeen Partner Group meeting on 27th April, or DDD Scotland on 8th May, then you may be able to catch my session. Credit: I want to give special thanks to Aaron Bjork from Microsoft, who provided me with most of my material. He is a Scrum and PowerPoint genius. Updated 9th May 2010 – I have now presented at both of these sessions and posted about it.

Scrum for Team Foundation Server 2010 Synopsis

Visual Studio ALM (formerly Visual Studio Team System (VSTS)) and Team Foundation Server (TFS) are the cornerstones of development on the Microsoft .NET platform. These are the best tools for a team to have successful projects and for the developers to have a focused and smooth software development process. For TFS 2010 Microsoft is heavily investing in Scrum and has already started moving some teams across to using it. Martin will not be going in depth with Scrum, but you can find out more about Scrum by reading the Scrum Guide, and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW's Rules to Better Scrum using TFS, which have been developed during our own Scrum implementations.

Come and see Martin Hinshelwood, Visual Studio ALM MVP and Solution Architect from SSW, show you:

* How to successfully gather requirements with user stories
* How to plan a project using TFS 2010 and Scrum
* How to work with a product backlog in TFS 2010
* The right way to plan a sprint with TFS 2010
* Tracking your progress
* The right way to use work items
* What you can use from the built-in reporting as well as the project portals available from the SharePoint dashboard
* The important reports to give your Product Owner / Project Manager

Walk away knowing how to see the project health and progress. Visual Studio ALM is designed to help address many of these traditional problems faced by teams. It does so by providing a set of integrated tools to help teams improve their software development activities and to help managers better support the software development processes. During this session we will cover the lifecycle of creating work items and how this fits into Scrum using Visual Studio ALM and Team Foundation Server (a small code taster follows below).

If you want to know more about how to do Scrum with TFS then there is a new course that has been created in collaboration with Microsoft and Scrum.org that is going to be the official course for working with TFS 2010. SSW has Professional Scrum Developer Trainers who specialise in training your developers in implementing Scrum with Microsoft's Visual Studio ALM tools. Ken Schwaber and Sam Guckenheimer: Professional Scrum Development

Technorati Tags: Scrum,VS ALM,VS 2010,TFS 2010
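The promised taster: a minimal sketch of creating a requirement work item against the TFS 2010 object model. The collection URL, project name, and work item type name are placeholders; the Scrum templates use "Product Backlog Item" where MSF Agile uses "User Story":

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class CreateBacklogItem
{
    static void Main()
    {
        // Placeholder collection URL and team project name.
        TfsTeamProjectCollection tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        WorkItemStore store = tfs.GetService<WorkItemStore>();
        Project project = store.Projects["MyProject"];

        // Create and save a new requirement work item.
        WorkItemType type = project.WorkItemTypes["User Story"];
        WorkItem story = new WorkItem(type);
        story.Title = "As a user, I can log in";
        story.Save();
    }
}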

    Read the article

  • Guidance and Pricing for MSDN 2010

    - by John Alexander
Sorry for the rather lengthy post here. I get asked this all the time so I decided to post it… Visual Studio 2010 editions will be available on April 12, 2010. The five editions compared are: Professional with MSDN Essentials, Professional with MSDN, Premium with MSDN, Ultimate with MSDN, and Test Professional with MSDN.

Product features, by area:

Debugging and Diagnostics: IntelliTrace (Historical Debugger), Static Code Analysis, Code Metrics, Profiling, Debugger

Testing Tools: Unit Testing, Code Coverage, Test Impact Analysis, Coded UI Test, Web Performance Testing, Load Testing (1), Microsoft Test Manager 2010, Test Case Management (2), Manual Test Execution, Fast-Forward for Manual Testing, Lab Management Configuration (3)

Integrated Development Environment: Multiple Monitor Support, Multi-Targeting, One Click Web Deployment, JavaScript and jQuery Support, Extensible WPF-Based Environment

Database Development: Database Deployment, Database Change Management (2), Database Unit Testing, Database Test Data Generation, Data Access

Development Platform Support: Windows Development, Web Development, Office and SharePoint Development, Cloud Development, Customizable Development Experience

Architecture and Modeling: Architecture Explorer, UML® 2.0 Compliant Diagrams (Activity, Use Case, Sequence, Class, Component), Layer Diagram and Dependency Validation, Read-only diagrams (UML, Layer, DGML Graphs)

Lab Management: Virtual environment setup & tear down (3), Provision environment from template (3), Checkpoint environment (3)

Team Foundation Server: Version Control (2), Work Item Tracking (2), Build Automation (2), Team Portal (2), Reporting & Business Intelligence (2), Agile Planning Workbook (2), Microsoft Visual Studio Team Explorer 2010, Test Case Management (2)

MSDN Subscription – Software and Services for Production Use:
Windows Azure Platform hours per month (†): Professional with MSDN Essentials 20, Professional with MSDN 50, Premium with MSDN 100, Ultimate with MSDN 250, Test Professional with MSDN n/a
Microsoft Visual Studio Team Foundation Server 2010, plus 1 Team Foundation Server 2010 CAL
Microsoft Expression Studio 3
Microsoft Office Professional Plus 2010, Project Professional 2010, Visio Premium 2010 (following Office 2010 launch)

MSDN Subscription – Software for Development and Testing (4): Windows 7, Windows Server 2008 R2 and SQL Server 2008; Toolkits, Software Development Kits, Driver Development Kits; previous versions of Windows (client and server operating systems); previous versions of Microsoft SQL Server; Microsoft Office; Microsoft Dynamics; all other Servers; Windows Embedded operating systems; Teamprise

MSDN Subscription – Other Benefits:
Technical support incidents: Professional with MSDN Essentials 0, Professional with MSDN 2, Premium with MSDN 4, Ultimate with MSDN 4, Test Professional with MSDN 2
Priority support in MSDN Forums
Microsoft e-learning collections (typically 10 courses or 20 hours): Professional with MSDN Essentials 0, Professional with MSDN 1, Premium with MSDN 2, Ultimate with MSDN 2, Test Professional with MSDN 1
MSDN Flash newsletter, MSDN Online Concierge, MSDN Magazine

Pricing (MSRP), in the same edition order:
Buy from: $799, $1,199, $5,469, $11,899, $2,169
Renew from: $549 (upgrade), $799, $2,299, $3,799, $899

† Availability varies by country and subscription level. Details available on the MSDN site.
1. May require one or more Microsoft Visual Studio Load Test Virtual User Pack 2010.
2. Requires Team Foundation Server and a Team Foundation Server CAL.
3. Requires Microsoft Visual Studio Lab Management 2010.
4. Per-user license allows unlimited installations and use for designing, developing, testing, and demonstrating applications.
UML is a registered trademark of Object Management Group, Inc. Windows is either a registered trademark or trademark of Microsoft Corporation in the United States and/or other countries.

    Read the article

  • Not Dead, Just Busy

    - by MOSSLover
    So I didn’t die in a freak smelting accident yet, but I have been dealing with a lot of different things.  I had to take a bit of a break to deal with the cat death issue.  I am not fully recovered, because well it just happened a few months ago.  It kind of sucked.  Plus the apartment feels a lot bigger. Then you have the whole New York Comic Con thing where I had to plan some cosplay costumes.  I have been trying to find time to hang out with friends and have a social life.  That plus I built an entire presentation for iOS development for New York Code Camp.  I am also planning a couple MS Community dinners (namely one a week from Tuesday) plus a give camp.  I am also planning a vacation around SPS UK plus I will be at SPC.  Life is just incredibly hectic and when you factor in dating to the mix it’s gotten insane to the point where some day I just have to go dark.  Hence the lack of blogging.  I am just trying to keep up with everything and everyone without losing myself. If you guys will be at SPC or SPS UK I will be at both places this year.  Stop by the Planet Technologies booth and see me or I’ll be around somewhere.  I am really sorry if I don’t remember you from an event or if you are someone following me on twitter.  I am trying to get better at the mnemonic memory devices, but I think things broke down around the 47th event I attended or spoke at or something to that nature.  If anyone wants to talk to Cathy, Lori, or I about Women in SharePoint definitely find us at the event.  Anyway good night and good luck guys.  I promise to check back at least once before the year ends.  In the meantime twitter stalking is always possible.  Sometimes I even respond back. Technorati Tags: SPC,SPS UK,NYCC,NYC Code Camp,MOSSLover

    Read the article

  • FREE Windows Azure evening in London on April 15th including FREE access to Windows Azure

    - by Eric Nelson
[Did I overdo the use of FREE in the title? :-)] April 12th to 16th is Microsoft Tech Days: 5 days of sessions on Visual Studio 2010 through to Windows 7 Phone Series. Many of these days are now full (tip: Thursday still has room if rich client applications is your thing), but the good news is the development community in the UK has pulled together an awesome series of “fringe events” during April in London and elsewhere in the UK. There are sessions on Silverlight, SQL Server 2008 R2, SharePoint 2010 and … the Windows Azure Platform.

The UK AzureNET user group is planning to put on a great evening, and AzureNET will be giving away hundreds of free subscriptions to the Windows Azure Platform during the evening. The subscription includes up to 20 Windows Azure Compute nodes and 3 SQL Azure databases for you to play with over the 2 weeks following the event. This is a great opportunity to really explore the Windows Azure Platform in detail, without a credit card! Register now! (And you might also want to join the UK Fans of Azure Community while I have your attention.) FYI: the Thursday daytime event includes an introduction to Windows Azure session delivered by my colleague David, which would be an ideal session to attend if you are new to Azure and want to get the most out of the evening session.

7:00pm: See the difference: How Windows Azure helped build a new way of giving. Simon Evans and James Broome (@broomej). They will cover the business context for Azure and then go into patterns used and lessons learnt from the project... as well as showing off the app, of course!

8:00pm: UK AzureNET update

8:15pm: NoSQL databases or: How I learned to love the hash table. Mark Rendle (@markrendle). In this session Mark will look at how Azure Table Service works and how to use it. We’ll look briefly at the high-level Data Services SDK, talk about its limitations, and then quickly move on to the REST API and how to use it to improve performance and reduce costs. We’ll make up some pretend real-world problems and solve them in new and interesting ways. We’ll denormalise data (for fun and profit). We’ll talk about how certain social networking sites can deal with huge volumes of data so quickly, and why it sometimes goes wrong.

Check out the complete list of fringe events, which covers the UK fairly well:

    Read the article

  • Ranking Part III

    - by PointsToShare
© 2011 By: Dov Trietsch. All rights reserved

Ranking Part III

In the previous blogs, “Ranking an Introduction” and “Ranking Part II”, you have already praised me in “Rank the Author” and learned how to create a new element on a page and how to place it where you need it. For this installment, I just added code to keep the number of votes (you vote by clicking one of the stars) and the total vote. Using these two, we can compute the average rating. It’s a small step, but its purpose is to show that we do not need a detailed history in order to compute the average; a running total is sufficient. Please note that once you close the game, you will lose your previous total. In real life, we persist the totals in the list itself. We also keep a list of actual votes, but its purpose is to prevent double votes. If a person has already voted, his user id is already on the list and our program will check for it and bar the person from voting again. This is coded in an event receiver, which is a SharePoint server piece of code. I will show you how to do this part in a subsequent blog.

Again, go to the page and look at the code. The gist of it is here. avg, votes, and stars are global variables that I defined before.

function sendRate(sel){
    // I hate long lines so I created pieces of the message in their own vars
    var s1 = "Your Rating Was: ";
    var s2 = ".. ";
    var s3 = "\nVotes = ";
    var s4 = "\nTotal Stars = ";
    var s5 = "\nAverage = ";
    var s;
    s = parseInt(sel.id.replace("_", '')); // Get the selected star number
    votes = parseInt(votes) + 1;           // one more vote
    stars = parseInt(stars) + s;           // add the new rating to the running total
    avg = parseFloat(stars) / parseFloat(votes); // average needs no history
    alert(s1 + sel.id + s2 + sel.title + s3 + votes + s4 + stars + s5 + avg);
}

Click on the link to play and examine “Ranking with Stats”. That’s all folks!
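The event receiver will get its own post, but a minimal sketch of the double-vote check, assuming a votes list named "Votes" with a text field "Voter" (both hypothetical names), could look like this:

using Microsoft.SharePoint;

// Cancels the add when the current user already has a vote on record.
public class DoubleVoteReceiver : SPItemEventReceiver
{
    public override void ItemAdding(SPItemEventProperties properties)
    {
        SPList votes = properties.Web.Lists["Votes"];
        SPQuery query = new SPQuery
        {
            Query = "<Where><Eq><FieldRef Name='Voter'/>" +
                    "<Value Type='Text'>" + properties.UserLoginName + "</Value></Eq></Where>"
        };
        if (votes.GetItems(query).Count > 0)
        {
            properties.Cancel = true; // bar the second vote
            properties.ErrorMessage = "You have already voted.";
        }
    }
}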

    Read the article

  • Analysis Services Tabular books #ssas #tabular

    - by Marco Russo (SQLBI)
Many people are looking for books about Analysis Services Tabular. Today there are two books available, and they complement each other:

* Microsoft SQL Server 2012 Analysis Services: The BISM Tabular Model by Marco Russo, Alberto Ferrari and Chris Webb
* Applied Microsoft SQL Server 2012 Analysis Services: Tabular Modeling by Teo Lachev

The book I wrote with Alberto and Chris is a complete guide to creating Tabular models and has good coverage of DAX, including how to use it to enrich a semantic model with calculated columns and measures and how to use it to query a Tabular model. In my experience, DAX as a query language is a very interesting option for custom analytical applications that require a fast calculation engine, or simply for standard reports running in Reporting Services and accessing a Tabular model. You can freely preview the table of contents and read some excerpts from the book on Safari Books Online. The book is in printing and should be shipped within mid-July, so it will very soon be on the shelf of all the people who already preordered it!

Teo Lachev's book covers the full spectrum of Tabular models provided by Microsoft: starting with self-service BI, you have users creating a model with PowerPivot for Excel, publishing it to PowerPivot for SharePoint and exploring data by using Power View; then the PowerPivot for Excel model can be imported into a Tabular model and published in Analysis Services, adding more control on the model through row-level security and partitioning, for example. Teo's book follows a step-by-step approach, describing each feature, which is very good for a beginner who is new to PowerPivot and/or to BISM Tabular. If you need to get the big picture and to start using the products that are part of the new Microsoft wave of BI products, Teo's book is for you.

After you read Teo's book, or if you already have a certain confidence with PowerPivot or BISM Tabular and you want to go deeper into internals, best practices and design patterns in just BISM Tabular, then our book is a suggested read: it contains several chapters about DAX, includes discussions about new opportunities in data model design offered by Tabular models, and also provides examples of optimizations you can obtain in DAX and best practices in data modeling and queries.

It might seem strange that an author writes a review of a book that might seem to compete with his own, but in reality these two books complement each other and are not alternatives. If you have any doubt, buy both: you will not be disappointed! Moreover, Amazon usually offers you a deal to buy three books, including Visualizing Data with Microsoft Power View, another good choice for getting all the details about Power View.

    Read the article

  • Text Trimming in Silverlight 4

    - by dwahlin
Silverlight 4 has a lot of great features that can be used to build consumer and Line of Business (LOB) applications. Although Webcam support, RichTextBox, MEF, WebBrowser and other new features are pretty exciting, I’m actually enjoying some of the more simple features that have been added, such as text trimming, built-in wheel scrolling with ScrollViewer and data binding enhancements such as StringFormat. In this post I’ll give a quick introduction to a simple yet productive feature called text trimming and show how it eliminates a lot of code compared to Silverlight 3.

The TextBlock control contains a new property in Silverlight 4 called TextTrimming that can be used to add an ellipsis (…) to text that doesn’t fit into a specific area on the user interface. Before the TextTrimming property was available I used a value converter to trim text, which meant passing in a specific number of characters that I wanted to show by using a parameter:

public class StringTruncateConverter : IValueConverter
{
    #region IValueConverter Members

    public object Convert(object value, Type targetType, object parameter,
        System.Globalization.CultureInfo culture)
    {
        int maxLength;
        if (int.TryParse(parameter.ToString(), out maxLength))
        {
            string val = (value == null) ? null : value.ToString();
            if (val != null && val.Length > maxLength)
            {
                return val.Substring(0, maxLength) + "..";
            }
        }
        return value;
    }

    public object ConvertBack(object value, Type targetType, object parameter,
        System.Globalization.CultureInfo culture)
    {
        throw new NotImplementedException();
    }

    #endregion
}

To use the StringTruncateConverter I’d define the standard xmlns prefix that referenced the namespace and assembly, add the class into the application’s Resources section and then use the class while data binding as shown next:

<TextBlock Grid.Column="1" Grid.Row="3"
    ToolTipService.ToolTip="{Binding ReportSummary.ProjectManagers}"
    Text="{Binding ReportSummary.ProjectManagers,
        Converter={StaticResource StringTruncateConverter}, ConverterParameter=16}"
    Style="{StaticResource SummaryValueStyle}" />

With Silverlight 4 I can define the TextTrimming property directly in XAML or use the new Property window in Visual Studio 2010 to set it to a value of WordEllipsis (the default value is None):

<TextBlock Grid.Column="1" Grid.Row="4"
    ToolTipService.ToolTip="{Binding ReportSummary.ProjectCoordinators}"
    Text="{Binding ReportSummary.ProjectCoordinators}"
    TextTrimming="WordEllipsis"
    Style="{StaticResource SummaryValueStyle}" />

The end result is a nice trimming of the text that doesn’t fit into the target area, as shown with the Coordinator and Foremen sections below. My data binding statements are now much smaller and I can eliminate the StringTruncateConverter class completely.

For more information about onsite, online and video training, mentoring and consulting solutions for .NET, SharePoint or Silverlight please visit http://www.thewahlingroup.com.
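The property can also be set from code rather than XAML; a one-line sketch, assuming a TextBlock named SummaryText (a hypothetical control name defined in the page's XAML):

// Code-behind equivalent of TextTrimming="WordEllipsis" in the XAML above
SummaryText.TextTrimming = TextTrimming.WordEllipsis;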

    Read the article

  • This Task Is Currently Locked by a Running Workflow and Cannot Be Edited

    - by Jayant Sharma
    Problem: In SharePoint Workflow, "This task is currently locked by a running workflow and cannot be edited" is a common exception that we face.

    Solution: Generally this exception occurs in two situations.

    1. When the number of items in the Task List gets high. This exception says that the workflow is not able to deliver all the events at a given time, and so the tasks get locked. Out of the box, the default event delivery throttle value is 15. The event delivery throttle value specifies the number of workflows that can be processed at the same time across all front-end Web servers; look at the following link: http://blogs.msdn.com/b/vincent_runge/archive/2008/09/16/about-the-workflow-eventdelivery-throttle-parameter.aspx. If the value returned by the query is superior to the throttle (15 by default), any new workflow event will not be processed immediately, so we need to change it with an stsadm command like:

        stsadm -o setproperty -pn workflow-eventdelivery-throttle -pv "20"

    (http://technet.microsoft.com/en-us/library/cc287939(office.12).aspx)

    2. When we modify a Workflow Task from a custom TaskEdit page. When we try to modify the workflow task outside the workflow's default page, for example from a custom workflow task edit page, this exception occurs. Suppose we have a custom task edit page with a dropdown whose values are Submitted/In Progress/Completed etc., and we want to complete the task from there; it will throw the exception on the SPWorkflowTask.AlterTask method, which changes the TaskStatus. When I debugged to find the root cause, I actually found that the workflow is not locked: the InternalState flag of the workflow does not include the Locked flag bits (http://msdn.microsoft.com/en-us/library/dd928318(v=office.12).aspx). Then I found this link: http://geek.hubkey.com/2007/09/locked-workflow.html. It is exactly what I wanted. It says that the error occurs "when the WorkflowVersion of the task list item is not equal to 1". The solution proposed there works fantastically:

        if ((int)task[SPBuiltInFieldId.WorkflowVersion] != 1)
        {
            SPList parentList = task.ParentList.ParentWeb.Lists[
                new Guid(task[SPBuiltInFieldId.WorkflowListId].ToString())];
            SPListItem parentItem = parentList.Items.GetItemById(
                (int)task[SPBuiltInFieldId.WorkflowItemId]);
            SPWorkflow workflow = parentItem.Workflows[
                new Guid(task[SPBuiltInFieldId.WorkflowInstanceID].ToString())];
            if (!workflow.IsLocked)
            {
                task[SPBuiltInFieldId.WorkflowVersion] = 1;
                task.SystemUpdate();
                break; // the snippet runs inside a loop over candidate tasks in the original post
            }
        }

    It will reset the workflow version to 1 again.

    Conclusion: This exception is completely confusing, so we first need to find out whether our workflow is really locked or not. If it is really locked, use the first method. If not, check the workflow version and set it to 1 again.

    Jayant Sharma
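    For context, a minimal sketch (an assumption, not from the original post) of how the version fix above might be combined with SPWorkflowTask.AlterTask when completing a task from a custom edit page:

        // Assumes "task" is the SPListItem of the workflow task and that the
        // version-reset logic shown above has already run when needed.
        // Requires: using System.Collections; using Microsoft.SharePoint;
        //           using Microsoft.SharePoint.Workflow;
        Hashtable data = new Hashtable();
        data["TaskStatus"] = "Completed"; // assumed status value for this task list
        SPWorkflowTask.AlterTask(task, data, true); // true = deliver the event synchronously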

    Read the article

  • Making the WPeFfort

    - by Laila
    Microsoft Visual Studio 2010 will be launched on April 12th. The basic layout looks pretty much as it did, so it is not immediately obvious on first inspection that it was completely rewritten in the Windows Presentation Foundation (WPF). The current VS 2008 codebase had reached the end of its life; it was getting slow to initialize and sluggish to run, and was never going to allow for multi-monitor support or easier extensibility. It can't have been an easy decision to rewrite Visual Studio, but the gamble seems to have paid off. Although certain bugs in the betas caused some anxiety about performance, these seem to have been fixed, and the new Visual Studio is definitely faster. In rewriting the codebase, it has been possible to make obvious improvements, such as being able to run different windows on different monitors, and only being presented with the Toolbox controls and References that are appropriate to your target .NET version. There is also an IntelliTrace debugger, and Intellisense has been improved by virtue of separating a 'Suggestion Mode' and 'Completion Mode' (with its 'Generate From…', 'Highlight References…', and 'Navigate To…' features). At the same time, there has been quite a clear-out; certain features that had been tucked away in the previous versions, such as Brief or Emacs emulation support, have been dropped. (Yes, they were being used!) There are a lot of features that didn't require the rewrite, but are welcome. It is now easier to develop WPF applications (e.g. drag-and-drop databinding), and there is support for Azure. There are more, and better, templates, and the design tools are greatly improved (e.g. Expression Web, Expression Blend, WPF Sketchflow, the Silverlight designer, Document Map Margin and Inline Call Hierarchy). SharePoint is better supported, and Office apps will benefit from C#'s support of optional and named arguments, and from allowing several Office Solutions within a deployment package. Most importantly, VS 2010 is a vote of confidence in WPF: it is the essential missing component that has been impeding faster adoption of WPF. The fact that it is actually now written in WPF should reassure the doubters, and convince more developers to make the move from WinForms to WPF. In using WPF, the developers of Visual Studio have had the clout to fix some issues which have been bothering WPF developers for some time (such as blurred text). Do you see a brighter future as a result of transferring from WinForms to WPF? I'd love to know what you think. Cheers, Laila
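    As a side note, a minimal sketch (a hypothetical method, not from the article) of the C# 4.0 optional and named arguments mentioned above, which spare Office interop code the old Type.Missing-for-every-parameter style:

        // A method with optional parameters and defaults...
        static void SaveReport(string path, bool readOnly = false, string format = "docx")
        {
            Console.WriteLine("{0} ({1}, readOnly={2})", path, format, readOnly);
        }

        // ...lets call sites skip the defaults and name only what they override:
        SaveReport("report.docx");
        SaveReport("report.docx", format: "pdf");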

    Read the article

  • Connect to QuickBooks from PowerBuilder using RSSBus ADO.NET Data Provider

    - by dataintegration
    The RSSBus ADO.NET providers are easy-to-use, standards-based controls that can be used from any platform or development technology that supports Microsoft .NET, including Sybase PowerBuilder. In this article we show how to use the RSSBus ADO.NET Provider for QuickBooks in PowerBuilder. A similar approach can be used from PowerBuilder with other RSSBus ADO.NET Data Providers to access data from Salesforce, SharePoint, Dynamics CRM, Google, OData, etc. We will show how to create a basic PowerBuilder application that performs CRUD operations using the RSSBus ADO.NET Provider for QuickBooks.

    Step 1: Open PowerBuilder and create a new WPF Window Application solution.
    Step 2: Add all the visual controls needed for the connection properties.
    Step 3: Add the DataGrid control from the .NET controls.
    Step 4: Configure the columns of the DataGrid control as shown below. The column bindings will depend on the table.

        <DataGrid AutoGenerateColumns="False" Margin="13,249,12,14" Name="datagrid1"
            TabIndex="70" ItemsSource="{Binding}">
            <DataGrid.Columns>
                <DataGridTextColumn x:Name="idColumn" Binding="{Binding Path=ID}"
                    Header="ID" Width="SizeToHeader" />
                <DataGridTextColumn x:Name="nameColumn" Binding="{Binding Path=Name}"
                    Header="Name" Width="SizeToHeader" />
                ...
            </DataGrid.Columns>
        </DataGrid>

    Step 5: Add a reference to the RSSBus ADO.NET Provider for QuickBooks assembly.
    Step 6 (optional): Set the QBXML Version to 6. Some of the tables in QuickBooks require a later version of QuickBooks to support updates and deletes. Please check the help for details.

    Connect the DataGrid: Once the visual elements have been configured, developers can use standard ADO.NET objects like Connection, Command, and DataAdapter to populate a DataTable with the results of a SQL query:

        System.Data.RSSBus.QuickBooks.QuickBooksConnection conn
        conn = create System.Data.RSSBus.QuickBooks.QuickBooksConnection(connectionString)
        System.Data.RSSBus.QuickBooks.QuickBooksCommand comm
        comm = create System.Data.RSSBus.QuickBooks.QuickBooksCommand(command, conn)
        System.Data.DataTable table
        table = create System.Data.DataTable
        System.Data.RSSBus.QuickBooks.QuickBooksDataAdapter dataAdapter
        dataAdapter = create System.Data.RSSBus.QuickBooks.QuickBooksDataAdapter(comm)
        dataAdapter.Fill(table)
        datagrid1.ItemsSource=table.DefaultView

    The code above can be used to bind data from any query (set this in command) to the DataGrid. The DataGrid should have the same columns as those returned from the SELECT statement.

    PowerBuilder Sample Project: The included sample project includes the steps outlined in this article. You will also need the QuickBooks ADO.NET Data Provider to make the connection. You can download a free trial here.
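    For readers more at home in C#, the same ADO.NET pattern might look like the sketch below (an illustration only; the connection string and SELECT statement are placeholders, and the provider classes are the ones named above):

        // Fill a DataTable from QuickBooks and bind it to the grid.
        // Requires: using System.Data; using System.Data.RSSBus.QuickBooks;
        var conn = new QuickBooksConnection(connectionString);
        var comm = new QuickBooksCommand("SELECT * FROM Customers", conn);
        var table = new DataTable();
        var adapter = new QuickBooksDataAdapter(comm);
        adapter.Fill(table);
        datagrid1.ItemsSource = table.DefaultView;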

    Read the article

  • Oracle WebCenter: Composite Applications & Mash-Ups

    - by kellsey.ruppel(at)oracle.com
    We’ve talked in previous weeks about how the key goals of the new release of WebCenter are providing a Modern User Experience, unparalleled Application Integration, converging all the best of the existing portal platforms into WebCenter, and delivering a Common User Experience Architecture. We’ve provided an overview of Oracle WebCenter and discussed some of the other key goals in previous weeks, and this week, we’ll focus on how, with the new release of Oracle WebCenter, you can create composite applications and mashups. We recently talked with Sachin Agarwal, Director of Product Management of Enterprise 2.0 at Oracle, around the topic of Composite Applications and Mashups. Oracle WebCenter provides a rich set of tools and capabilities for pulling in content, applications and collaboration functionality from various different sources and weaving them together into what we call Mashups. Mashups that also consist of transactional applications from multiple sources are specifically called Composite Applications. With the latest release of Oracle WebCenter one can develop highly productive, task-based interfaces that aggregate a related set of applications that are part of a business process and provide in-context collaboration tools, so that users don’t have to navigate away to different tabs to achieve these tasks. For instance, a call center representative (CSR) not only needs to be able to pull customer information from a CRM application like Siebel, but also related information from Oracle E-Business Suite about whether a specific order has shipped. The CSR will be far more efficient if he or she does not have to open different tabs to log into multiple applications while the customer is waiting, but can access all this information in one mashup. Oracle WebCenter Suite provides a comprehensive set of tooling that enables a business user to quickly assemble a mashup and wire in different backend applications to create a custom dashboard. Not only does Oracle WebCenter support a wide set of standards (WSRP 1.0, 2.0, JSR 168, JSR 286) that allow portlets from other applications to be surfaced within WebCenter, but it also provides tools to bring in other web applications such as .NET applications as well as SharePoint web parts. The new Business Mash-up editor allows business users to take any Oracle application or 3rd-party application and wire the backend data sources or APIs to a rich set of visualizations and reuse them in mashups. Moreover, business users can customize or personalize any page using Oracle WebCenter Composer’s on-the-fly visual page editing features. Users access and select different resource components available in Oracle WebCenter’s Business Dictionary in order to add new content to the page. The Business Dictionary provides a role-based view of available components or resources, and these components can include information from a variety of enterprise resources such as enterprise applications, managed content, rich media, business processes, or business intelligence systems.
    Together, Oracle WebCenter’s Composer and Business Dictionary give users access to a powerful, yet easy to use, set of tools to personalize and extend their Oracle WebCenter portals and applications without involving IT. Keep checking back this week as we share more information on how you can easily create Composite Applications and Mashups with Oracle WebCenter.
    Technorati Tags: UXP, collaboration, enterprise 2.0, modern user experience, oracle, portals, webcenter, applications, mashups, composite applications

    Read the article

< Previous Page | 263 264 265 266 267 268 269 270 271 272 273 274  | Next Page >