Search Results

Search found 11190 results on 448 pages for 'bug report'.


  • Replace JBoss error page with Axis2 fault XML response

    - by Dario
    I'm developing a webservice with Axis2 1.4.1 on JBoss 4.2.3/Tomcat 5.5.27 and Java 1.5.0 (15-b04). It works flawlessly, but when an exception happens I get a JBoss HTTP 500 HTML error page instead of an Axis2 XML/SOAP fault. This behavior is vexing because it makes it difficult to handle errors in the webservice client or in SoapUI while developing. Can I change this to get the SOAP fault? Maybe it's just an Axis2 or JBoss parameter, but I haven't found any clue about it. EDIT: Here is the new stacktrace: [ERROR] WSDoAllReceiver: security processing failed org.apache.axis2.AxisFault: WSDoAllReceiver: security processing failed at org.apache.rampart.handler.WSDoAllReceiver.processBasic(WSDoAllReceiver.java:214) at org.apache.rampart.handler.WSDoAllReceiver.processMessage(WSDoAllReceiver.java:86) at org.apache.rampart.handler.WSDoAllHandler.invoke(WSDoAllHandler.java:72) at org.apache.axis2.engine.Phase.invoke(Phase.java:317) at org.apache.axis2.engine.AxisEngine.invoke(AxisEngine.java:264) at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:163) at org.apache.axis2.transport.http.HTTPTransportUtils.processHTTPPostRequest(HTTPTransportUtils.java:275) at org.apache.axis2.transport.http.AxisServlet.doPost(AxisServlet.java:133) at javax.servlet.http.HttpServlet.service(HttpServlet.java:647) at javax.servlet.http.HttpServlet.service(HttpServlet.java:729) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:269) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:172) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:108) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:174) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:875) at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665) at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528) at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81) at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:689) at java.lang.Thread.run(Thread.java:595) Caused by: org.apache.ws.security.WSSecurityException: The security token could not be authenticated or authorized at org.apache.ws.security.processor.UsernameTokenProcessor.handleUsernameToken(UsernameTokenProcessor.java:155) at org.apache.ws.security.processor.UsernameTokenProcessor.handleToken(UsernameTokenProcessor.java:53) at org.apache.ws.security.WSSecurityEngine.processSecurityHeader(WSSecurityEngine.java:311) at org.apache.ws.security.WSSecurityEngine.processSecurityHeader(WSSecurityEngine.java:228) at org.apache.rampart.handler.WSDoAllReceiver.processBasic(WSDoAllReceiver.java:211) ... 
23 more [ERROR] Servlet.service() para servlet AxisServlet lanzó excepción java.lang.NullPointerException at org.apache.rampart.RampartMessageData.<init>(RampartMessageData.java:308) at org.apache.rampart.MessageBuilder.build(MessageBuilder.java:61) at org.apache.rampart.handler.RampartSender.invoke(RampartSender.java:64) at org.apache.axis2.engine.Phase.invoke(Phase.java:317) at org.apache.axis2.engine.AxisEngine.invoke(AxisEngine.java:264) at org.apache.axis2.engine.AxisEngine.sendFault(AxisEngine.java:520) at org.apache.axis2.transport.http.AxisServlet.handleFault(AxisServlet.java:416) at org.apache.axis2.transport.http.AxisServlet.processAxisFault(AxisServlet.java:379) at org.apache.axis2.transport.http.AxisServlet.doPost(AxisServlet.java:167) at javax.servlet.http.HttpServlet.service(HttpServlet.java:647) at javax.servlet.http.HttpServlet.service(HttpServlet.java:729) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:269) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:172) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:108) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:174) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:875) at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665) at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528) at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81) at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:689) at java.lang.Thread.run(Thread.java:595) EDIT 2: After giving the bounty I found that I was wrong about 1.2.9-SNAPSHOT version of Axiom. I built it again, made sure the jars where correctly copied to lib directory and it worked! Finally, it was an Axiom bug, as said in the links provided by Vineet. Thanks!
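    On the original question (getting a SOAP fault rather than the container's HTML error page), the usual pattern is to throw, or wrap failures into, org.apache.axis2.AxisFault, which the engine serializes as a <soapenv:Fault>. Below is a minimal sketch assuming a plain POJO-style Axis2 service; the class, method, and helper names are illustrative, not taken from the post, and the axis2.xml parameter sendStacktraceDetailsWithFaults controls how much detail ends up in the fault.

        import org.apache.axis2.AxisFault;

        public class AccountService {
            // Illustrative operation: wrap any failure in an AxisFault so the
            // Axis2 engine can serialize it as a SOAP fault instead of letting
            // the exception escape to the servlet container.
            public String getAccountName(String accountId) throws AxisFault {
                try {
                    return lookupName(accountId);   // hypothetical business logic
                } catch (Exception e) {
                    throw new AxisFault("getAccountName failed: " + e.getMessage(), e);
                }
            }

            private String lookupName(String accountId) {
                return "account-" + accountId;      // placeholder for the real lookup
            }
        }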

    Read the article

  • LDAP query on linux against AD returns groups with no members

    - by SethG
    I am using LDAP+kerberos to authenticate against Active Directory on Windows 2003 R2. My krb5.conf and ldap.conf appear to be correct (according to pretty much every sample I found on the 'net). I can login to the host with both password and ssh keys. When I run getent passwd, all my ldap user accounts are listed with all the important attributes. When I run getent group, all the ldap groups and their gid's are listed, but no group members. If I run ldapsearch and filter on any group, the members are all listed with the "member" attribute. So the data is there for the taking, it's just not being parsed properly. It would appear that I simply am using an incorrect mapping in ldap.conf, but I can't see it. I've tried several variations and all give the same result. Here is my current ldap.conf: host <ad-host1-ip> <ad-host2-ip> base dc=my,dc=full,dc=dn uri ldap://<ad-host1> ldap://<ad-host2> ldap_version 3 binddn <mybinddn> bindpw <mybindpw> scope sub bind_policy hard nss_reconnect_tries 3 nss_reconnect_sleeptime 1 nss_reconnect_maxsleeptime 8 nss_reconnect_maxconntries 3 nss_map_objectclass posixAccount User nss_map_objectclass posixGroup Group nss_map_attribute uid sAMAccountName nss_map_attribute gidNumber msSFU30GidNumber nss_map_attribute uidNumber msSFU30UidNumber nss_map_attribute cn cn nss_map_attribute gecos displayName nss_map_attribute homeDirectory msSFU30HomeDirectory nss_map_attribute loginShell msSFU30LoginShell nss_map_attribute uniqueMember member pam_filter objectcategory=User pam_login_attribute sAMAccountName pam_member_attribute member pam_password ad Here's the kicker: this config works 100% fine on a different linux box with a different distro. It does not work on the distro I am planning on switching to. I have installed from source the versions of pam_ldap and nss_ldap on the new box to match the old box, which fixed another problem I was having with this setup. Other relevant info is the original AD box was Windows 2003. It's mirror died a horrible hardware death so I'm trying to add two more 2003-R2 servers to the mirror tree and ultimately drop the old 2003 box. The new R2 boxes appear to have joined the DC forest properly. What do I need to do to get groups working? I've exhausted all the resources I could find and need a different angle. Any input is appreciated. Status update, 7/31/09 I have managed to tweak my config file to get full info from the AD and performance is nice and snappy. I replaced the back-rev'd copies of pam_ldap and nss_ldap with the current ones for the distro I'm using, so it's back to a standard out-of-the-box install. 
Here's my current config: host <ad-host1-ip> <ad-host2-ip> base dc=my,dc=full,dc=dn uri ldap://<ad-host1> ldap://<ad-host2> ldap_version 3 binddn <mybinddn> bindpw <mybindpw> scope sub bind_policy soft nss_reconnect_tries 3 nss_reconnect_sleeptime 1 nss_reconnect_maxsleeptime 8 nss_reconnect_maxconntries 3 nss_connect_policy oneshot referrals no nss_map_objectclass posixAccount User nss_map_objectclass posixGroup Group nss_map_attribute uid sAMAccountName nss_map_attribute gidNumber msSFU30GidNumber nss_map_attribute uidNumber msSFU30UidNumber nss_map_attribute cn cn nss_map_attribute gecos displayName nss_map_attribute homeDirectory msSFU30HomeDirectory nss_map_attribute loginShell msSFU30LoginShell nss_map_attribute uniqueMember member pam_filter objectcategory=CN=Person,CN=Schema,CN=Configuration,DC=w2k,DC=cis,DC=ksu,DC=edu pam_login_attribute sAMAccountName pam_member_attribute member pam_password ad ssl off tls_checkpeer no sasl_secprops maxssf=0 The remaining problem now is when you run the groups command, not all subscribed groups are listed. Some are (one or two), but not all. Group memberships are still honored, such as file and printer access. getent group foo still shows that the user is a member of group foo. So it appears to be a presentation bug, and does not interfere with normal operation. It also appears that some (I have not determined exactly how many) group searches do not resolve correctly, even though the group is listed. eg, when you run "getent group bar", nothing is returned, but if you run "getent group|grep bar" or "getent group|grep <bar_gid>" you can see that it indeed listed and your group name and gid are correct. This still seems like an LDAP search or mapping error, but I can't figure out what it is. I'm a heckuva lot closer than earlier in the week, but I'd really like to get this last detail ironed out.
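    One way to narrow down whether the remaining issue is a mapping problem or a search problem is to repeat the query nss_ldap would issue, using the same bind DN and base. A hedged example with the same placeholders as the config above (not real values); if the group entry and its member values come back here but "getent group bar" is still empty, the problem is on the nss_ldap side (attribute mapping or search scope) rather than in AD.

        # Does the group return usable member and msSFU30GidNumber values
        # through the same bind DN nss_ldap is using?
        ldapsearch -x -H ldap://<ad-host1> \
            -D "<mybinddn>" -w "<mybindpw>" \
            -b "dc=my,dc=full,dc=dn" \
            "(&(objectClass=group)(cn=bar))" cn msSFU30GidNumber member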

    Read the article

  • Ivy resolve not working with dynamic artifact

    - by richever
    I've been using Ivy a bit but I seem to still have a lot to learn. I have two projects. One is a web app and the other is a library upon which the web app depends. The set up is that the library project is compiled to a jar file and published using Ivy to a directory within the project. In the web app build file, I have an ant target that calls the Ivy resolve ant task. What I'd like to do is have the web app using the dynamic resolve mode during development (on developer's local machines) and default resolve mode for test and production builds. Previously I was appending a time stamp to the library archive file so that Ivy would notice changes in file when the web app tried to resolve its dependency on it. Within Eclipse this is cumbersome because, in the web app, the project had to be refreshed and the build path tweaked every time a new library jar was published. Publishing a similarly named jar file every time would, I figure, only require developers to refresh the project. The problem is that the web app is unable to retrieve the dynamic jar file. The output I get looks something like this: resolve: [ivy:configure] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ :: [ivy:configure] :: loading settings :: file = /Users/richard/workspace/webapp/web/WEB-INF/config/ivy/ivysettings.xml [ivy:resolve] :: resolving dependencies :: com.webapp#webapp;[email protected] [ivy:resolve] confs: [default] [ivy:resolve] found com.webapp#library;latest.integration in local [ivy:resolve] :: resolution report :: resolve 142ms :: artifacts dl 0ms --------------------------------------------------------------------- | | modules || artifacts | | conf | number| search|dwnlded|evicted|| number|dwnlded| --------------------------------------------------------------------- | default | 1 | 0 | 0 | 0 || 0 | 0 | --------------------------------------------------------------------- [ivy:resolve] [ivy:resolve] :: problems summary :: [ivy:resolve] :::: WARNINGS [ivy:resolve] :::::::::::::::::::::::::::::::::::::::::::::: [ivy:resolve] :: UNRESOLVED DEPENDENCIES :: [ivy:resolve] :::::::::::::::::::::::::::::::::::::::::::::: [ivy:resolve] :: com.webapp#library;latest.integration: impossible to resolve dynamic revision [ivy:resolve] :::::::::::::::::::::::::::::::::::::::::::::: [ivy:resolve] :::: ERRORS [ivy:resolve] impossible to resolve dynamic revision for com.webapp#library;latest.integration: check your configuration and make sure revision is part of your pattern [ivy:resolve] [ivy:resolve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS BUILD FAILED /Users/richard/workspace/webapp/build.xml:71: impossible to resolve dependencies: resolve failed - see output for details The web app resolve target looks like this: <target name="resolve" depends="load-ivy"> <ivy:configure file="${ivy.dir}/ivysettings.xml" /> <ivy:resolve file="${ivy.dir}/ivy.xml" resolveMode="${ivy.resolve.mode}"/> <ivy:retrieve pattern="${lib.dir}/[artifact]-[revision].[ext]" type="jar" sync="true" /> </target> In this case, ivy.resolve.mode has a value of 'dynamic' (without quotes). The web app's Ivy file is simple. 
It looks like this: <ivy-module version="2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://ant.apache.org/ivy/schemas/ivy.xsd"> <info organisation="com.webapp" module="webapp"/> <dependencies> <dependency name="library" rev="${ivy.revision.default}" revConstraint="${ivy.revision.dynamic}" /> </dependencies> </ivy-module> During development, ivy.revision.dynamic has a value of 'latest.integration', while for test and production builds 'ivy.revision.default' has a value of '1.0'. Any ideas? Please let me know if there's any more information I need to supply. Thanks!
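    The "make sure revision is part of your pattern" message usually points at the resolver the library was published to: latest.integration can only be resolved if that resolver's ivy and artifact patterns contain the [revision] token, so Ivy can list and compare published revisions. A hedged sketch of what the "local" resolver in ivysettings.xml might look like (the repository path is illustrative):

        <ivysettings>
          <settings defaultResolver="local"/>
          <resolvers>
            <!-- [revision] must appear in both patterns so latest.integration
                 for com.webapp#library can be resolved -->
            <filesystem name="local">
              <ivy pattern="${ivy.settings.dir}/repo/[organisation]/[module]/ivy-[revision].xml"/>
              <artifact pattern="${ivy.settings.dir}/repo/[organisation]/[module]/[artifact]-[revision].[ext]"/>
            </filesystem>
          </resolvers>
        </ivysettings>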

    Read the article

  • wpf window refresh works at first, then stops

    - by mcl
    I've got a routine that grabs a list of all images in a directory, then runs an MD5 digest on all of them. Since this takes a while to do, I pop up a window with a progress bar. The progress bar is updated by a lambda that I pass in to the long-running routine. The first problem was that the progress window was never updated (which is normal in WPF I guess). Since WPF lacks a Refresh() command I fixed this with a call to Dispatcher.Invoke(). Now the progress bar is updated for a while, then the window stops being updated. The long-running work does eventually finish and the windows go back to normal. I have already tried a BackgroundWorker and quickly became frustrated by a threading issue related to an event triggered by the long-running process. So if that's really the best solution and I just need to learn the paradigm better, please say so. But I'd be really much happier with the approach I've got here, except that it stops updating after a bit (for example, in a folder with 1000 files, it might update for 50-100 files, then "hang"). The UI does not need to be responsive during this activity, except to report on progress. Anyway, here's the code. First the progress window itself: public partial class ProgressWindow : Window { public ProgressWindow(string title, string supertext, string subtext) { InitializeComponent(); this.Title = title; this.SuperText.Text = supertext; this.SubText.Text = subtext; } internal void UpdateProgress(int count, int total) { this.ProgressBar.Maximum = Convert.ToDouble(total); this.ProgressBar.Value = Convert.ToDouble(count); this.SubText.Text = String.Format("{0} of {1} finished", count, total); this.Dispatcher.Invoke(DispatcherPriority.Render, EmptyDelegate); } private static Action EmptyDelegate = delegate() { }; } <Window x:Class="Pixort.ProgressWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Pixort Progress" Height="128" Width="256" WindowStartupLocation="CenterOwner" WindowStyle="SingleBorderWindow" ResizeMode="NoResize"> <DockPanel> <TextBlock DockPanel.Dock="Top" x:Name="SuperText" TextAlignment="Left" Padding="6"></TextBlock> <TextBlock DockPanel.Dock="Bottom" x:Name="SubText" TextAlignment="Right" Padding="6"></TextBlock> <ProgressBar x:Name="ProgressBar" Height="24" Margin="6"/> </DockPanel> </Window> The long running method (in Gallery.cs): public void ImportFolder(string folderPath, Action<int, int> progressUpdate) { string[] files = this.FileIO.GetFiles(folderPath); for (int i = 0; i < files.Length; i++) { // do stuff with the file if (null != progressUpdate) { progressUpdate.Invoke(i + 1, files.Length); } } } Which is called thusly: ProgressWindow progress = new ProgressWindow("Import Folder Progress", String.Format("Importing {0}", folder), String.Empty); progress.Show(); this.Gallery.ImportFolder(folder, ((c, t) => progress.UpdateProgress(c, t))); progress.Close();
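    If the BackgroundWorker route is worth a second look, a minimal sketch is below; it assumes the same Gallery.ImportFolder and ProgressWindow.UpdateProgress signatures shown above. The hashing runs off the UI thread, ReportProgress marshals the counts back to the UI thread, and the Dispatcher.Invoke workaround inside UpdateProgress becomes unnecessary. Any event raised by the long-running process would likewise be forwarded through ReportProgress rather than touching controls directly.

        var worker = new System.ComponentModel.BackgroundWorker { WorkerReportsProgress = true };

        // Heavy work happens on a thread-pool thread.
        worker.DoWork += (s, e) =>
            this.Gallery.ImportFolder(folder, (c, t) => worker.ReportProgress(c, t));

        // ProgressChanged and RunWorkerCompleted are raised on the UI thread.
        worker.ProgressChanged += (s, e) =>
            progress.UpdateProgress(e.ProgressPercentage, (int)e.UserState);
        worker.RunWorkerCompleted += (s, e) => progress.Close();

        progress.Show();
        worker.RunWorkerAsync();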

    Read the article

  • Configure PERL DBI and DBD in Linux

    - by Balualways
    I am new to Perl and I work on a Linux OEL 5.x server. I am trying to configure the Perl DB modules for Oracle connectivity (the DBD and DBI modules). Can anyone help me out with the installation procedure? I tried CPAN, but it didn't really work out. Any help would be appreciated. I am not quite sure whether I need to initialize any variables other than $LD_LIBRARY_PATH and $ORACLE_HOME. These are my observations: ISSUE:: I am getting the following issue while using the DBI module to connect to Oracle: install_driver(Oracle) failed: Can't locate loadable object for module DBD::Oracle in @INC (@INC contains: /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/site_perl/5.8.8 /usr/lib/perl5/site_perl /usr/lib64/perl5/vendor_perl/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/vendor_perl/5.8.8 /usr/lib/perl5/vendor_perl /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/5.8.8 .) at (eval 3) line 3 Compilation failed in require at (eval 3) line 3. Perhaps a module that DBD::Oracle requires hasn't been fully installed at connectdb.pl line 57 I had installed DBD for Oracle from /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/DBD/DBD-Oracle-1.50 Could you please take a look at the steps and correct me if I am wrong: Observations: $ echo $LD_LIBRARY_PATH /opt/CA/UnicenterAutoSysJM/autosys/lib:/opt/CA/SharedComponents/Csam/SockAdapter/lib:/opt/CA/SharedComponents/ETPKI/lib:/opt/CA/CAlib $ echo $ORACLE_HOME /usr/local/oracle/ORA This is how I tried to install the DBD module: Download the file DBD 1.50 for Oracle Copy to /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/DBD Untar and Makefile.PL . Message: Using DBI 1.52 (for perl 5.008008 on x86_64-linux-thread-multi) installed in /usr/lib64/perl5/vendor_perl/5.8.8/x86_64-linux-thread-multi/auto/DBI/ Configuring DBD::Oracle for perl 5.008008 on linux (x86_64-linux-thread-multi) Remember to actually *READ* the README file! Especially if you have any problems. Installing on a linux, Ver#2.6 Using Oracle in /opt/oracle/product/10.2 DEFINE _SQLPLUS_RELEASE = "1002000400" (CHAR) Oracle version 10.2.0.4 (10.2) Found /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk Found /opt/oracle/product/10.2/rdbms/demo/demo_rdbms64.mk Found /opt/oracle/product/10.2/rdbms/lib/ins_rdbms.mk Using /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk Your LD_LIBRARY_PATH env var is set to '/usr/local/oracle/ORA/lib:/usr/dt/lib:/usr/openwin/lib:/usr/local/oracle/ORA/ows/cartx/wodbc/1.0/util/lib:/usr/local/oracle/ORA/lib:/usr/local/sybase/OCS-12_0/lib:/usr/local/sybase/lib:/home/oracle/jdbc/jdbcoci73/lib:./' WARNING: Your LD_LIBRARY_PATH env var doesn't include '/opt/oracle/product/10.2/lib' but probably needs to. Reading /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk Reading /usr/local/oracle/ORA/rdbms/lib/env_rdbms.mk Attempting to discover Oracle OCI build rules sh: make: command not found by executing: [make -f /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk build ECHODO=echo ECHO=echo GENCLNTSH='echo genclntsh' CC=true OPTIMIZE= CCFLAGS= EXE=DBD_ORA_EXE OBJS=DBD_ORA_OBJ.o] WARNING: Oracle build rule discovery failed (32512) Add path to make command into your PATH environment variable. Oracle oci build prolog: [sh: make: command not found] Oracle oci build command: [] WARNING: Unable to interpret Oracle build commands from /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk. (Will continue by using fallback approach.) Please report this to [email protected]. See README for what to include. 
Found header files in /opt/oracle/product/10.2/rdbms/public. client_version=10.2 DEFINE= -Wall -Wno-comment -DUTF8_SUPPORT -DORA_OCI_VERSION=\"10.2.0.4\" -DORA_OCI_102 Checking for functioning wait.ph System: perl5.008008 linux ca-build9.us.oracle.com 2.6.20-1.3002.fc6xen #1 smp thu apr 30 18:08:39 pdt 2009 x86_64 x86_64 x86_64 gnulinux Compiler: gcc -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_REENTRANT -D_GNU_SOURCE -fno-strict-aliasing -pipe -Wdeclaration-after-statement -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm Linker: not found Sysliblist: -ldl -lm -lpthread -lnsl -lirc Oracle makefiles would have used these definitions but we override them: CC: cc CFLAGS: $(GFLAG) $(OPTIMIZE) $(CDEBUG) $(CCFLAGS) $(PFLAGS)\ $(SHARED_CFLAG) $(USRFLAGS) [$(GFLAG) -O3 $(CDEBUG) -m32 $(TRIGRAPHS_CCFLAGS) -fPIC -I/usr/local/oracle/ORA/rdbms/demo -I/usr/local/oracle/ORA/rdbms/public -I/usr/local/oracle/ORA/plsql/public -I/usr/local/oracle/ORA/network/public -DLINUX -D_GNU_SOURCE -D_LARGEFILE64_SOURCE=1 -D_LARGEFILE_SOURCE=1 -DSLTS_ENABLE -DSLMXMX_ENABLE -D_REENTRANT -DNS_THREADS -fno-strict-aliasing $(LPFLAGS) $(USRFLAGS)] build: $(CC) $(ORALIBPATH) -o $(EXE) $(OBJS) $(OCISHAREDLIBS) [ cc -L$(LIBHOME) -L/usr/local/oracle/ORA/rdbms/lib/ -o $(EXE) $(OBJS) -lclntsh $(EXPDLIBS) $(EXOSLIBS) -ldl -lm -lpthread -lnsl -lirc -ldl -lm $(USRLIBS) -lpthread] LDFLAGS: $(LDFLAGS32) [-m32 -o $@ -L/usr/local/oracle/ORA/rdbms//lib32/ -L/usr/local/oracle/ORA/lib32/ -L/usr/local/oracle/ORA/lib32/stubs/] Linking with /usr/local/oracle/ORA/rdbms/lib/defopt.o -lclntsh -ldl -lm -lpthread -lnsl -lirc -ldl -lm -lpthread [from $(DEF_OPT) $(OCISHAREDLIBS)] Checking if your kit is complete... Looks good LD_RUN_PATH=/usr/local/oracle/ORA/lib Using DBD::Oracle 1.50. Using DBD::Oracle 1.50. Using DBI 1.52 (for perl 5.008008 on x86_64-linux-thread-multi) installed in /usr/lib64/perl5/vendor_perl/5.8.8/x86_64-linux-thread-multi/auto/DBI/ Writing Makefile for DBD::Oracle Writing MYMETA.yml and MYMETA.json *** If you have problems... read all the log printed above, and the README and README.help.txt files. (Of course, you have read README by now anyway, haven't you?)
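    The build log above actually contains the root cause: "sh: make: command not found" and "Linker: not found", i.e. the build toolchain is missing, so Makefile.PL falls back and no working shared object for DBD::Oracle ever gets built and installed. A hedged sequence for OEL 5.x is below; the package names are the stock RHEL/OEL ones and the Oracle paths echo the observations above.

        # as root: install the compiler toolchain the DBD::Oracle build needs
        yum install gcc make

        # environment for the build (values taken from the observations above)
        export ORACLE_HOME=/usr/local/oracle/ORA
        export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH

        # rebuild DBD::Oracle from its unpacked source directory
        cd DBD-Oracle-1.50
        perl Makefile.PL
        make
        make test      # optional; needs a reachable database and ORACLE_USERID set
        make install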

    Read the article

  • Android Dev Help: Saving an image from Res/raw or Asset folder to the Sd card

    - by Lucy
    Android development query: Hello, I wonder if anyone could help me. I am trying to save an image (jpg or png) from the res/raw or assets folder to the SD card location (/sdcard/DCIM/). I have been following a tutorial which saves an image from a URL to the SD card root, but I have looked everywhere for a way to save from the res/raw or assets folder instead, and to a different location on the SD card (/sdcard/DCIM/). Here is the code; can anyone show me how to do the above from this? Thanks, Lucy public class home extends Activity { private File file; private String imgNumber; private Button btnDownload; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); btnDownload=(Button)findViewById(R.id.btnDownload); btnDownload.setOnClickListener(new OnClickListener() { public void onClick(View v) { btnDownload.setText("Download is in Progress."); String savedFilePath=Download("http://www.domain.com/android1.png"); Toast.makeText(getApplicationContext(), "File is Saved in "+savedFilePath, 1000).show(); if(savedFilePath!=null) { btnDownload.setText("Download Completed."); } } }); } public String Download(String Url) { String filepath=null; try { //set the download URL, a url that points to a file on the internet //this is the file to be downloaded URL url = new URL(Url); //create the new connection HttpURLConnection urlConnection = (HttpURLConnection) url.openConnection(); //set up some things on the connection urlConnection.setRequestMethod("GET"); urlConnection.setDoOutput(true); //and connect! urlConnection.connect(); //set the path where we want to save the file //in this case, going to save it on the root directory of the //sd card. File SDCardRoot = Environment.getExternalStorageDirectory(); //create a new file, specifying the path, and the filename //which we want to save the file as. String filename= "download_"+System.currentTimeMillis()+".png"; // you can download to any type of file ex:.jpeg (image) ,.txt(text file),.mp3 (audio file) Log.i("Local filename:",""+filename); file = new File(SDCardRoot,filename); if(file.createNewFile()) { file.createNewFile(); } //this will be used to write the downloaded data into the file we created FileOutputStream fileOutput = new FileOutputStream(file); //this will be used in reading the data from the internet InputStream inputStream = urlConnection.getInputStream(); //this is the total size of the file int totalSize = urlConnection.getContentLength(); //variable to store total downloaded bytes int downloadedSize = 0; //create a buffer... byte[] buffer = new byte[1024]; int bufferLength = 0; //used to store a temporary size of the buffer //now, read through the input buffer and write the contents to the file while ( (bufferLength = inputStream.read(buffer)) > 0 ) { //add the data in the buffer to the file in the file output stream (the file on the sd card fileOutput.write(buffer, 0, bufferLength); //add up the size so we know how much is downloaded downloadedSize += bufferLength; //this is where you would do something to report the prgress, like this maybe Log.i("Progress:","downloadedSize:"+downloadedSize+"totalSize:"+ totalSize) ; btnDownload.setText("download Status:"+downloadedSize+" / "+totalSize); } //close the output stream when done fileOutput.close(); if(downloadedSize==totalSize) filepath=file.getPath(); //catch some possible errors... 
} catch (MalformedURLException e) { e.printStackTrace(); } catch (IOException e) { filepath=null; btnDownload.setText("Internet Connection Failed.\n"+e.getMessage()); e.printStackTrace(); } Log.i("filepath:"," "+filepath) ; return filepath; } }
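    A hedged adaptation of the same method that copies out of the APK instead of downloading: getResources().openRawResource() replaces the HTTP input stream, and the output file goes under DCIM. R.raw.android1 and the output file name are illustrative, the method is assumed to live inside the same Activity (same imports as above), and WRITE_EXTERNAL_STORAGE must be declared in the manifest.

        public String copyRawToDcim() {
            String filepath = null;
            try {
                File dcim = new File(Environment.getExternalStorageDirectory(), "DCIM");
                if (!dcim.exists()) {
                    dcim.mkdirs();
                }
                File outFile = new File(dcim, "android1.png");

                // res/raw source; for an asset use getAssets().open("android1.png") instead
                InputStream inputStream = getResources().openRawResource(R.raw.android1);
                FileOutputStream fileOutput = new FileOutputStream(outFile);

                byte[] buffer = new byte[1024];
                int bufferLength;
                while ((bufferLength = inputStream.read(buffer)) > 0) {
                    fileOutput.write(buffer, 0, bufferLength);
                }
                fileOutput.close();
                inputStream.close();
                filepath = outFile.getPath();
            } catch (IOException e) {
                e.printStackTrace();
            }
            return filepath;
        }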

    Read the article

  • Postfix configuration - Using Virtualmin but server is bouncing back my mail

    - by brodiebrodie
    I have no experience in setting up postfix, and thought virtualmin minght do the legwork for me. Appears not. When I try to send mail to the domain (either [email protected] [email protected] or [email protected]) I get the following message returned This is the mail system at host dedq239.localdomain. I'm sorry to have to inform you that your message could not be delivered to one or more recipients. It's attached below. For further assistance, please send mail to <postmaster> If you do so, please include this problem report. You can delete your own text from the attached returned message. The mail system <[email protected]> (expanded from <[email protected]>): User unknown in virtual alias table Final-Recipient: rfc822; [email protected] Original-Recipient: rfc822;[email protected] Action: failed Status: 5.0.0 Diagnostic-Code: X-Postfix; User unknown in virtual alias table How can I diagnose the problem here? It seems that the mail gets to my server but the server fails to locally deliver the message to the correct user. (This is a guess, truthfully I have no idea what is happening). I have checked my virtual alias table and it seems to be set up correctly (I can post if this would be helpful). Can anyone give me a clue as to the next step? Thanks alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases broken_sasl_auth_clients = yes command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix debug_peer_level = 2 html_directory = no local_recipient_maps = $virtual_mailbox_maps mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain myorigin = $mydomain newaliases_path = /usr/bin/newaliases.postfix readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES sample_directory = /usr/share/doc/postfix-2.3.3/samples sendmail_path = /usr/sbin/sendmail.postfix setgid_group = postdrop smtpd_recipient_restrictions = permit_mynetworks reject_unauth_destination smtpd_sasl_auth_enable = yes soft_bounce = no unknown_local_recipient_reject_code = 550 virtual_alias_maps = hash:/etc/postfix/virtual My mail log file (the last entry) Sep 30 15:13:47 dedq239 postfix/cleanup[7237]: 207C6B18158: message-id=<[email protected]> Sep 30 15:13:47 dedq239 postfix/qmgr[7177]: 207C6B18158: from=<[email protected]>, size=1805, nrcpt=1 (queue active) Sep 30 15:13:47 dedq239 postfix/error[7238]: 207C6B18158: to=<[email protected]>, orig_to=<[email protected]>, relay=none, delay=0.64, delays=0.61/0.01/0/0.02, dsn=5.0.0, status=bounced (User unknown in virtual alias table) Sep 30 15:13:47 dedq239 postfix/cleanup[7237]: 8DC13B18169: message-id=<[email protected]> Sep 30 15:13:47 dedq239 postfix/qmgr[7177]: 8DC13B18169: from=<>, size=3691, nrcpt=1 (queue active) Sep 30 15:13:47 dedq239 postfix/bounce[7239]: 207C6B18158: sender non-delivery notification: 8DC13B18169 Sep 30 15:13:47 dedq239 postfix/qmgr[7177]: 207C6B18158: removed Sep 30 15:13:48 dedq239 postfix/smtp[7240]: 8DC13B18169: to=<[email protected]>, relay=gmail-smtp-in.l.google.com[209.85.216.55]:25, delay=1.3, delays=0.02/0.01/0.58/0.75, dsn=2.0.0, status=sent (250 2.0.0 OK 1254348828 36si15082901pxi.91) Sep 30 15:13:48 dedq239 postfix/qmgr[7177]: 8DC13B18169: removed Sep 30 15:14:17 dedq239 postfix/smtpd[7233]: disconnect from mail-bw0-f228.google.com[209.85.218.228] etc.aliases file below I have not touched this file - myvirtualdomain is a replacement for my real domain name # Aliases in this file will NOT be expanded in 
the header from # Mail, but WILL be visible over networks or from /bin/mail. # # >>>>>>>>>> The program "newaliases" must be run after # >> NOTE >> this file is updated for any changes to # >>>>>>>>>> show through to sendmail. # # Basic system aliases -- these MUST be present. mailer-daemon: postmaster postmaster: root # General redirections for pseudo accounts. bin: root daemon: root adm: root lp: root sync: root shutdown: root halt: root mail: root news: root uucp: root operator: root games: root gopher: root ftp: root nobody: root radiusd: root nut: root dbus: root vcsa: root canna: root wnn: root rpm: root nscd: root pcap: root apache: root webalizer: root dovecot: root fax: root quagga: root radvd: root pvm: root amanda: root privoxy: root ident: root named: root xfs: root gdm: root mailnull: root postgres: root sshd: root smmsp: root postfix: root netdump: root ldap: root squid: root ntp: root mysql: root desktop: root rpcuser: root rpc: root nfsnobody: root ingres: root system: root toor: root manager: root dumper: root abuse: root newsadm: news newsadmin: news usenet: news ftpadm: ftp ftpadmin: ftp ftp-adm: ftp ftp-admin: ftp www: webmaster webmaster: root noc: root security: root hostmaster: root info: postmaster marketing: postmaster sales: postmaster support: postmaster # trap decode to catch security attacks decode: root # Person who should get root's mail #root: marc abuse-myvirtualdomain.com: [email protected] My etc/postfix/virtual file is below - again myvirtualdomain is a replacement. I think this file was generated by Virtualmin and I have tried messing around with is with no success... This is the version without my changes. myunixusername@myvirtualdomain .com myunixusername myvirtualdomain .com myvirtualdomain.com [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected]
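    For reference, a hedged sketch of what a working /etc/postfix/virtual usually looks like for this kind of setup: one bare-domain line to mark the domain as a virtual alias domain, then one line per accepted address, with no stray spaces inside the domain names. The domain and account names below are placeholders, and after any edit the map has to be rebuilt and Postfix reloaded.

        # /etc/postfix/virtual
        # first entry marks the domain; the right-hand value is ignored
        myvirtualdomain.com              OK
        # one entry per accepted address -> local user or remote address
        info@myvirtualdomain.com         myunixusername
        webmaster@myvirtualdomain.com    myunixusername
        contact@myvirtualdomain.com      someone@example.com

        # then rebuild the lookup table and reload:
        #   postmap /etc/postfix/virtual
        #   postfix reload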

    Read the article

  • Why might login failures cause SQL 2005 to dump and ditch?

    - by Byron Sommardahl
    Our SQL 2005 server began timing out and finally stopped responding on Oct 26th. The application logs showed a ton of 17883 events leading up to a reboot. After the reboot everything was fine, but we were still scratching our heads. Fast forward 6 days... it happened again. Then again 2 days later. Then last night. Today it has happened three times so far. The timeline is fairly predictable when it happens: Trans log backups. Login failure for "user2". Minidump Another minidump for the scheduler Repeated 17883 events. Server fails little by little until it won't accept any requests. Reboot is all that gets us going again (a band-aid). Interestingly, though, the server box itself doesn't seem to have any problems. CPU usage is normal. Network connectivity is fine. We can remote in and look at logs. Management Studio does eventually bog down, though. Today, for the first time, we tried stopping services instead of a reboot. All services stopped on their own except for the SQL Server service. We finally did an "end task" on that one and were able to bring everything back up. It worked fine for about 30 minutes until we started seeing timeouts and 17883's again. This time, probably because we didn't reboot all the way, we saw a bunch of 844 events mixed in with the 17883's. Our entire tech team here is scratching their heads... some ideas we're kicking around: MS Cumulative Update hit around the same time as when we first had a problem. Since then, we've rolled it back. Maybe it didn't roll back all the way. The situation looks and feels like an unhandled "stack overflow" (no relation) in that it starts small and compounds over time. The problem with this is that there isn't significant CPU usage. At any rate, we're not ruling out a SQL 2005 bug at all. Maybe we added one too many import processes and have reached our limit on this box (hard to believe). Looking at SQLDUMP0151.log at the time of one of the crashes, there are some "login failures" and then there are two stack dumps: first a normal stack dump, second a scheduler dump. Here's a snippet (sorry for the lack of line breaks): 2009-11-10 11:59:14.95 spid63 Using 'xpsqlbot.dll' version '2005.90.3042' to execute extended stored procedure 'xp_qv'. This is an informational message only; no user action is required. 2009-11-10 11:59:15.09 spid63 Using 'xplog70.dll' version '2005.90.3042' to execute extended stored procedure 'xp_msver'. This is an informational message only; no user action is required. 2009-11-10 12:02:33.24 Logon Error: 18456, Severity: 14, State: 16. 2009-11-10 12:02:33.24 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101] 2009-11-10 12:08:21.12 Logon Error: 18456, Severity: 14, State: 16. 2009-11-10 12:08:21.12 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101] 2009-11-10 12:13:49.38 Logon Error: 18456, Severity: 14, State: 16. 2009-11-10 12:13:49.38 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101] 2009-11-10 12:15:16.88 Logon Error: 18456, Severity: 14, State: 16. 2009-11-10 12:15:16.88 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101] 2009-11-10 12:18:24.41 Logon Error: 18456, Severity: 14, State: 16. 2009-11-10 12:18:24.41 Logon Login failed for user 'standard_user2'. 
[CLIENT: 50.36.172.101] 2009-11-10 12:18:38.88 spid111 Using 'dbghelp.dll' version '4.0.5' 2009-11-10 12:18:39.02 spid111 *Stack Dump being sent to C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\SQLDump0149.txt 2009-11-10 12:18:39.02 spid111 SqlDumpExceptionHandler: Process 111 generated fatal exception c0000005 EXCEPTION_ACCESS_VIOLATION. SQL Server is terminating this process. 2009-11-10 12:18:39.02 spid111 * ***************************************************************************** 2009-11-10 12:18:39.02 spid111 * 2009-11-10 12:18:39.02 spid111 * BEGIN STACK DUMP: 2009-11-10 12:18:39.02 spid111 * 11/10/09 12:18:39 spid 111 2009-11-10 12:18:39.02 spid111 * 2009-11-10 12:18:39.02 spid111 * 2009-11-10 12:18:39.02 spid111 * Exception Address = 0159D56F Module(sqlservr+0059D56F) 2009-11-10 12:18:39.02 spid111 * Exception Code = c0000005 EXCEPTION_ACCESS_VIOLATION 2009-11-10 12:18:39.02 spid111 * Access Violation occurred writing address 00000000 2009-11-10 12:18:39.02 spid111 * Input Buffer 138 bytes - 2009-11-10 12:18:39.02 spid111 * " N R S C _ P T A 22 00 4e 00 52 00 53 00 43 00 5f 00 50 00 54 00 41 00 2009-11-10 12:18:39.02 spid111 * C _ Q A . d b o . 43 00 5f 00 51 00 41 00 2e 00 64 00 62 00 6f 00 2e 00 2009-11-10 12:18:39.02 spid111 * U s p S e l N e x 55 00 73 00 70 00 53 00 65 00 6c 00 4e 00 65 00 78 00 2009-11-10 12:18:39.02 spid111 * t A c c o u n t 74 00 41 00 63 00 63 00 6f 00 75 00 6e 00 74 00 00 00 2009-11-10 12:18:39.02 spid111 * @ i n t F o r m I 0a 40 00 69 00 6e 00 74 00 46 00 6f 00 72 00 6d 00 49 2009-11-10 12:18:39.02 spid111 * D & 8 @ t x 00 44 00 00 26 04 04 38 00 00 00 09 40 00 74 00 78 00 2009-11-10 12:18:39.02 spid111 * t A l i a s § 74 00 41 00 6c 00 69 00 61 00 73 00 00 a7 0f 00 09 04 2009-11-10 12:18:39.02 spid111 * Ð GQE9732 d0 00 00 07 00 47 51 45 39 37 33 32 2009-11-10 12:18:39.02 spid111 * 2009-11-10 12:18:39.02 spid111 * 2009-11-10 12:18:39.02 spid111 * MODULE BASE END SIZE 2009-11-10 12:18:39.02 spid111 * sqlservr 01000000 02C09FFF 01c0a000 2009-11-10 12:18:39.02 spid111 * ntdll 7C800000 7C8C1FFF 000c2000 2009-11-10 12:18:39.02 spid111 * kernel32 77E40000 77F41FFF 00102000

    Read the article

  • Secondary DHCP server won't start on Centos 6.2

    - by Slowjoe
    I'm trying to create a backup DHCP server. Server times are in sync. Primary server starts fine. Secondary server won't start. Error from /var/log/messages is: Sep 15 14:47:45 stream dhcpd: Copyright 2004-2010 Internet Systems Consortium. Sep 15 14:47:45 stream dhcpd: All rights reserved. Sep 15 14:47:45 stream dhcpd: For info, please visit https://www.isc.org/software/dhcp/ Sep 15 14:47:45 stream dhcpd: /etc/dhcp/dhcpd.conf line 25: invalid statement in peer declaration Sep 15 14:47:45 stream dhcpd: #011max-response-default Sep 15 14:47:45 stream dhcpd: ^ Sep 15 14:47:45 stream dhcpd: /etc/dhcp/dhcpd.conf line 41: failover peer dhcp-failover: not found Sep 15 14:47:45 stream dhcpd: failover peer "dhcp-failover" Sep 15 14:47:45 stream dhcpd: ^ Sep 15 14:47:45 stream dhcpd: /etc/dhcp/dhcpd.conf line 49: failover peer dhcp-failover: not found Sep 15 14:47:45 stream dhcpd: failover peer "dhcp-failover" Sep 15 14:47:45 stream dhcpd: ^ Sep 15 14:47:45 stream dhcpd: WARNING: Host declarations are global. They are not limited to the scope you declared them in. Sep 15 14:47:45 stream dhcpd: /etc/dhcp/dhcpd.conf line 70: failover peer dhcp-failover: not found Sep 15 14:47:45 stream dhcpd: failover peer "dhcp-failover" Sep 15 14:47:45 stream dhcpd: ^ Sep 15 14:47:45 stream dhcpd: /etc/dhcp/dhcpd.conf line 78: failover peer dhcp-failover: not found Sep 15 14:47:45 stream dhcpd: failover peer "dhcp-failover" Sep 15 14:47:45 stream dhcpd: ^ Sep 15 14:47:45 stream dhcpd: Configuration file errors encountered -- exiting Sep 15 14:47:45 stream dhcpd: Sep 15 14:47:45 stream dhcpd: This version of ISC DHCP is based on the release available Sep 15 14:47:45 stream dhcpd: on ftp.isc.org. Features have been added and other changes Sep 15 14:47:45 stream dhcpd: have been made to the base software release in order to make Sep 15 14:47:45 stream dhcpd: it work better with this distribution. Sep 15 14:47:45 stream dhcpd: Sep 15 14:47:45 stream dhcpd: Please report for this software via the CentOS Bugs Database: Sep 15 14:47:45 stream dhcpd: http://bugs.centos.org/ Sep 15 14:47:45 stream dhcpd: Sep 15 14:47:45 stream dhcpd: exiting. Config file contents: # DHCP Server Configuration file. 
# see /usr/share/doc/dhcp*/dhcpd.conf.sample # see 'man 5 dhcpd.conf' # option domain-name "eng.foo.com"; option domain-name-servers ns0.eng.foo.com, ns1.eng.foo.com; option ntp-servers ntp.eng.foo.com; #option time-servers ntp.eng.foo.com; default-lease-time 3600; max-lease-time 7200; authoritative; log-facility local7; failover peer "dhcp-failover" { secondary; address 10.0.1.70; port 647; peer address 10.0.1.11; peer port 647; max-response-default 30; max-unacked-updates 10; load balance max seconds 3; } # # Management subnet # subnet 10.0.0.0 netmask 255.255.255.0 { option subnet-mask 255.255.255.0; option broadcast-address 10.0.0.255; option routers 10.0.0.1; option domain-search "eng.foo.com", "foo.com"; # Unknown clients get this pool pool { failover peer "dhcp-failover"; max-lease-time 300; range 10.0.0.240 10.0.0.249; allow unknown-clients; } # Known clients get this pool pool { failover peer "dhcp-failover"; max-lease-time 28800; range 10.0.0.150 10.0.0.199; deny unknown-clients; } include "/etc/dhcp/dhcpd.conf-engmgmt"; } # # Data subnet # subnet 10.0.1.0 netmask 255.255.255.0 { option subnet-mask 255.255.255.0; option broadcast-address 10.0.1.255; option routers 10.0.1.1; option domain-search "eng.foo.com", "foo.com"; # Unknown clients get this pool pool { failover peer "dhcp-failover"; max-lease-time 300; range 10.0.1.240 10.0.1.249; allow unknown-clients; } # Known clients get this pool pool { failover peer "dhcp-failover"; max-lease-time 28800; range 10.0.1.150 10.0.1.199; deny unknown-clients; } # For centos network installs if substring (option vendor-class-identifier, 0, 8) = "anaconda" { filename "/autohome/distro/ks/"; next-server eng-data.eng.foo.com; } # For PXE network installs if substring (option vendor-class-identifier, 0, 9) = "PXEClient" { filename "pxelinux.0"; next-server eng-data.eng.foo.com; } # For KVM PXE network installs if substring (option vendor-class-identifier, 0, 9) = "Etherboot" { filename "pxelinux.0"; next-server eng-data.eng.foo.com; } include "/etc/dhcp/dhcpd.conf-engdata"; }
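    The first parser error is the one that matters: dhcpd has no "max-response-default" statement (the failover timer is called max-response-delay), and once the peer declaration fails to parse, every later "failover peer dhcp-failover: not found" error cascades from it. A hedged correction of just the failover block, with the addresses copied from the config above:

        failover peer "dhcp-failover" {
          secondary;
          address 10.0.1.70;
          port 647;
          peer address 10.0.1.11;
          peer port 647;
          max-response-delay 30;
          max-unacked-updates 10;
          load balance max seconds 3;
        }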

    Read the article

  • Netbeans platform projects - problems with wrapped jar files that have dependencies

    - by I82Much
    For starters, this question is not so much about programming in the NetBeans IDE as developing a NetBeans project (e.g. using the NetBeans Platform framework). I am attempting to use the BeanUtils library to introspect my domain models and provide the properties to display in a property sheet. Sample code: public class MyNode extends AbstractNode implements PropertyChangeListener { private static final PropertyUtilsBean bean = new PropertyUtilsBean(); // snip protected Sheet createSheet() { Sheet sheet = Sheet.createDefault(); Sheet.Set set = Sheet.createPropertiesSet(); APIObject obj = getLookup().lookup (APIObject.class); PropertyDescriptor[] descriptors = bean.getPropertyDescriptors(obj); for (PropertyDescriptor d : descriptors) { Method readMethod = d.getReadMethod(); Method writeMethod = d.getWriteMethod(); Class valueType = d.getClass(); Property p = new PropertySupport.Reflection(obj, valueType, readMethod, writeMethod); set.put(p); } sheet.put(set); return sheet; } I have created a wrapper module around commons-beanutils-1.8.3.jar, and added a dependency on the module in my module containing the above code. Everything compiles fine. When I attempt to run the program and open the property sheet view (i.e.. the above code actually gets run), I get the following error: java.lang.ClassNotFoundException: org.apache.commons.logging.LogFactory at java.net.URLClassLoader$1.run(URLClassLoader.java:200) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:188) at java.lang.ClassLoader.loadClass(ClassLoader.java:319) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:330) at java.lang.ClassLoader.loadClass(ClassLoader.java:254) at org.netbeans.ProxyClassLoader.loadClass(ProxyClassLoader.java:259) Caused: java.lang.ClassNotFoundException: org.apache.commons.logging.LogFactory starting from ModuleCL@64e48e45[org.apache.commons.beanutils] with possible defining loaders [ModuleCL@75da931b[org.netbeans.libs.commons_logging]] and declared parents [] at org.netbeans.ProxyClassLoader.loadClass(ProxyClassLoader.java:261) at java.lang.ClassLoader.loadClass(ClassLoader.java:254) at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:399) Caused: java.lang.NoClassDefFoundError: org/apache/commons/logging/LogFactory at org.apache.commons.beanutils.PropertyUtilsBean.<init>(PropertyUtilsBean.java:132) at org.myorg.myeditor.MyNode.<clinit>(MyNode.java:35) at org.myorg.myeditor.MyEditor.<init>(MyEditor.java:33) at org.myorg.myeditor.OpenEditorAction.actionPerformed(OpenEditorAction.java:13) at org.openide.awt.AlwaysEnabledAction$1.run(AlwaysEnabledAction.java:139) at org.netbeans.modules.openide.util.ActionsBridge.implPerformAction(ActionsBridge.java:83) at org.netbeans.modules.openide.util.ActionsBridge.doPerformAction(ActionsBridge.java:67) at org.openide.awt.AlwaysEnabledAction.actionPerformed(AlwaysEnabledAction.java:142) at javax.swing.AbstractButton.fireActionPerformed(AbstractButton.java:2028) at javax.swing.AbstractButton$Handler.actionPerformed(AbstractButton.java:2351) at javax.swing.DefaultButtonModel.fireActionPerformed(DefaultButtonModel.java:387) at javax.swing.DefaultButtonModel.setPressed(DefaultButtonModel.java:242) at javax.swing.AbstractButton.doClick(AbstractButton.java:389) at com.apple.laf.ScreenMenuItem.actionPerformed(ScreenMenuItem.java:95) at java.awt.MenuItem.processActionEvent(MenuItem.java:627) at java.awt.MenuItem.processEvent(MenuItem.java:586) at 
java.awt.MenuComponent.dispatchEventImpl(MenuComponent.java:317) at java.awt.MenuComponent.dispatchEvent(MenuComponent.java:305) [catch] at java.awt.EventQueue.dispatchEvent(EventQueue.java:638) at org.netbeans.core.TimableEventQueue.dispatchEvent(TimableEventQueue.java:125) at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:296) at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:211) at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:201) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:196) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:188) at java.awt.EventDispatchThread.run(EventDispatchThread.java:122) I understand that beanutils is using the commons-logging component. I have tried adding the commons-logging component in two different ways (creating a wrapper library around the commons-logging library, and putting a dependency on the Commons Logging Integration library). Neither solves the problem. I noticed that the same problem occurs with other wrapped libraries; if they themselves have external dependencies, the ClassNotFoundExceptions propagate like mad, even if I've wrapped the jars of the libraries they require and added them as dependencies to the original wrapped library module. Pictorially: I'm at my wits end here. I noticed similar problems while googling: Is there a known bug on NB Module dependency Same issue I'm facing but when wrapping a different jar NetBeans stance on this - none of the 3 apply to me. None conclusively help me. Thank you, Nick

    Read the article

  • Python bindings for a vala library

    - by celil
    I am trying to create python bindings to a vala library using the following IBM tutorial as a reference. My initial directory has the following two files: test.vala using GLib; namespace Test { public class Test : Object { public int sum(int x, int y) { return x + y; } } } test.override %% headers #include <Python.h> #include "pygobject.h" #include "test.h" %% modulename test %% import gobject.GObject as PyGObject_Type %% ignore-glob *_get_type %% and try to build the python module source test_wrap.c using the following code build.sh #/usr/bin/env bash valac test.vala -CH test.h python /usr/share/pygobject/2.0/codegen/h2def.py test.h > test.defs pygobject-codegen-2.0 -o test.override -p test test.defs > test_wrap.c However, the last command fails with an error $ ./build.sh Traceback (most recent call last): File "/usr/share/pygobject/2.0/codegen/codegen.py", line 1720, in <module> sys.exit(main(sys.argv)) File "/usr/share/pygobject/2.0/codegen/codegen.py", line 1672, in main o = override.Overrides(arg) File "/usr/share/pygobject/2.0/codegen/override.py", line 52, in __init__ self.handle_file(filename) File "/usr/share/pygobject/2.0/codegen/override.py", line 84, in handle_file self.__parse_override(buf, startline, filename) File "/usr/share/pygobject/2.0/codegen/override.py", line 96, in __parse_override command = words[0] IndexError: list index out of range Is this a bug in pygobject, or is something wrong with my setup? What is the best way to call code written in vala from python? EDIT: Removing the extra line fixed the current problem, but now as I proceed to build the python module, I am facing another problem. Adding the following C file to the existing two in the directory: test_module.c #include <Python.h> void test_register_classes (PyObject *d); extern PyMethodDef test_functions[]; DL_EXPORT(void) inittest(void) { PyObject *m, *d; init_pygobject(); m = Py_InitModule("test", test_functions); d = PyModule_GetDict(m); test_register_classes(d); if (PyErr_Occurred ()) { Py_FatalError ("can't initialise module test"); } } and building with the following script build.sh #/usr/bin/env bash valac test.vala -CH test.h python /usr/share/pygobject/2.0/codegen/h2def.py test.h > test.defs pygobject-codegen-2.0 -o test.override -p test test.defs > test_wrap.c CFLAGS="`pkg-config --cflags pygobject-2.0` -I/usr/include/python2.6/ -I." LDFLAGS="`pkg-config --libs pygobject-2.0`" gcc $CFLAGS -fPIC -c test.c gcc $CFLAGS -fPIC -c test_wrap.c gcc $CFLAGS -fPIC -c test_module.c gcc $LDFLAGS -shared test.o test_wrap.o test_module.o -o test.so python -c 'import test; exit()' results in an error: $ ./build.sh ***INFO*** The coverage of global functions is 100.00% (1/1) ***INFO*** The coverage of methods is 100.00% (1/1) ***INFO*** There are no declared virtual proxies. ***INFO*** There are no declared virtual accessors. ***INFO*** There are no declared interface proxies. Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: ./test.so: undefined symbol: init_pygobject Where is the init_pygobject symbol defined? What have I missed linking to?
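    On the second error: init_pygobject is not a library symbol to link against but a macro provided by pygobject.h, so the module init file has to include that header itself (the pkg-config cflags in build.sh already put it on the include path). A hedged version of test_module.c:

        #include <Python.h>
        #include <pygobject.h>

        void test_register_classes (PyObject *d);
        extern PyMethodDef test_functions[];

        DL_EXPORT(void) inittest(void)
        {
            PyObject *m, *d;

            init_pygobject();   /* macro from pygobject.h: imports gobject and grabs its C API */

            m = Py_InitModule("test", test_functions);
            d = PyModule_GetDict(m);
            test_register_classes(d);

            if (PyErr_Occurred())
                Py_FatalError("can't initialise module test");
        }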

    Read the article

  • Java 7 update 6 installation fails on Windows 7 when Chrome is default browser

    - by ali1234
    I am configuring a brand new Lenovo U410 system with Windows 7 Home Premium for a user. I received the system direct from the shop. As part of the configuration I installed Java using the online installer. This worked correctly. Later, due to a mistake I made, I needed to restore the system to factory default. The factory default FORMATS C:\ and puts back (supposedly) the exact factory configuration. However, after doing this, I was no longer able to install Java successfully using the same method I used before. Now, whenever I attempt to use the online Java installer, the following happens. First of all, a window always appears "Welcome to Java", "Downloading Java Installer...". After short time this window disappears and then one of three things happens: The very first time I do this after doing a factory reset, I get a Windows error report, which contains this information: Application Name: JavaSetup7u5.exe Application Version: 7.0.50.6 Application Timestamp: 4feacd84 Fault Module Name: JavaIC.dll Fault Module Version: 9.9.9.9 Fault Module Timestamp: 4f2343d6 Exception Offset: 000052cb Exception Code: c0000417 Exception Data: 00000000 OS Version: 6.1.7600.2.0.0.768.3 Locale ID: 1033 Additional Information 1: 773c Additional Information 2: 773cd78cf06816f8246f359fa270f3bb Additional Information 3: f51a Additional Information 4: f51aaea7d22f36fa9e3a626b5a5cd1c3 2. Subsequent runs produce either this error message: "Error: Java(TM) installer - Downloaded file C:\Users\\AppData\Local\Temp\fx-runtime.exe is corrupt." or Nothing happens at all. I Believe this is a red herring. Running the installer again causes a different error because the files were downloaded and the installer crashed before it could clean up. This isn't the actual problem, as when this happens the installer deletes the downloaded files, and then when you run it for the third time, it downloads everything again and does the javaic.dll crash. I suspect the downloader is appending to the existing files or something, causing the corruption. I have tried all of the above as Administrator and as a normal user. I have tried reseting the system to factory defaults several times. I have tried downloading with Chrome and Internet Explorer 9. I have tried uninstalling all anti-virus software and disabling the windows firewall entirely. The only thing which makes a difference is running the installer in Windows XP compatibility mode, which allows the installation to complete. I know I can workaround this error by using the offline installer so please don't post that as an answer. I am looking for an explanation of the root cause. Additionally, if I use the offline installer, the updater does not work. The updater also does not work if I install in XP mode. The updater fails because it works by just downloading the newest online setup and running it. Also remember that the installers are digitally signed. The signitures verify correctly so there is no way in hell that this is caused by corrupted downloads. Some theories I have: The Java setup files on java.com actually changed in between the first successful install and my later attempts. Seems unlikely as none of the version numbers have changed. However, I have seen a couple of reports of this error which showed up in the past 24 hours. This looks like the most likely explanation right now: http://www.oracle.com/us/corporate/press/1735645 - Oracle released 7 update 6 two days ago. 
Careful inspection of the installers reveal that they are in fact attempting to download .6, not .5 as the download page claims. Not actually correct. Only the update tool tries to install 7u6. The online installer still tries 7u5. However, 7u6 being released two days ago is too much of a coincidence to ignore. Update: The 7u6 online installer is available from Oracle technetwork. It crashes in exactly the same way. The factory reset software uses GMT-8 and I am on GMT-1. As a result, after factory reset, any software which cares to check would think that the system was restored 7 hours in the future, due to Window's awful policy of storing local time in the system clock. This could be confusing a certificate check or similar. Update: I discovered that this does cause Windows Update to fail. The workaround, setting the clock back before starting factory reset, does not enable Java to install correctly. The factory reset image isn't really the same as what is installed in the main partition when you buy the system. Naughty Lenovo. The installer appears to crash while installing or displaying something to do with the Ask.com toolbar. That seems to be what javaic.dll does. Microsoft Tuesday was the 14th. Some update in that could be causing this. However, I'm factory reseting the machine every time, so unless the patches get slipstreamed into the recovery image, or there is some mechanism by which they get silently installed even if updates are disabled, then I don't see how this can be the cause. Major breakthrough: The default browser on Lenovo systems is Google Chrome. I noticed that the JavaIC.dll "sponsor check" actually does a check on your default browser in order to decide which sponsor ad to display. Normally that would get you the Ask toolbar on IE9. But that toolbar doesn't work on Chrome, and so the installer tries to display a different ad. The different ad is what causes the crash. Changing the default browser to IE9 allows the installer to run correctly. So this looks like a genuine bug in the sponsor ad code in the installer, caused by a combination of Google Chrome default browser and not being in the US. (Installer also checks your location using IP geolocation service and displays different ads based on that.)

    Read the article

  • Web service using Data Dynamics ActiveReports occasionally slows down

    - by Swoop
    My company is running into a problem with a web service that is written in C#/ASP.Net. The service receives an identity key for data in SQL Server and a path to generate and save a PDF report for this data. In most cases, this web service returns results to the calling web pages very quickly, usually within a few seconds max. However, it seems to occasionally hit a significant slowdown. The web application calling the web service will generate a timeout error when this slowdown occurs. We have checked and the PDF does get created and saved to the server, so it looks like the web service eventually finishes executing. It seems to take about 1 to 2 minutes for processing to complete. The PDF is generated using ActiveReports from Data Dynamics. When this problem occurs, making a small change to the web service's config file (i.e., adding a blank space to a connection string line) seems to restart the web service, and everything is perfectly ok for a period of time afterwards. Other web applications that are running on the same web server do not seem to experience this type of behavior, only this particular web service. I have added the code for the web service below. It is just basic calls to third-party libraries. We are not able to recreate this problem in test. I am wondering what might be causing this issue. [WebMethod] public string Publish(int identity, string transactionType, string directory, string filename) { try { AdpConnection Conn = new AdpConnection(ConfigurationManager.AppSettings["myDBConnString"]); AdpCommand Cmd = new AdpCommand("storedproc_GetData", Conn); AdpParameter Param; Cmd.CommandType = CommandType.StoredProcedure; Param = Cmd.CreateParameter("@Identity", DbType.Int32); Param.Value = identity; Cmd.Parameters.Add(Param); Conn.Open(); string aResponse = Cmd.ExecuteScalar().ToString(); Conn.Close(); if (transactionType == "typeA") { //Parse response DataSet dsResponse = ParseDataResponse(aResponse); //dsResponse.WriteXml(@ConfigurationManager.AppSettings["DocsDir"] + identity.ToString() + ".xml"); DataDynamics.ActiveReports.ActiveReport3 rpt = new DataDynamics.ActiveReports.ActiveReport3(); rpt.LoadLayout(@ConfigurationManager.AppSettings["myReportPath"] + "TypeA.rpx"); rpt.AddNamedItem("ReportPath", @ConfigurationManager.AppSettings["myReportPath"]); rpt.AddNamedItem("XMLSTRING", FormatXML(dsResponse.GetXml())); DataDynamics.ActiveReports.DataSources.XMLDataSource xmlds = new DataDynamics.ActiveReports.DataSources.XMLDataSource(); xmlds.FileURL = null; xmlds.RecordsetPattern = "//DataPatternA"; xmlds.LoadXML(FormatXML(dsResponse.GetXml())); if (!System.IO.Directory.Exists(@ConfigurationManager.AppSettings["DocsDir"] + directory + @"\")) { System.IO.Directory.CreateDirectory(@ConfigurationManager.AppSettings["DocsDir"] + directory + @"\"); } string sXML = FormatXML(dsResponse.GetXml()); StreamWriter sw = new StreamWriter(@ConfigurationManager.AppSettings["DocsDir"] + directory + @"\" + filename + ".xml", false); sw.Write(sXML); sw.Close(); rpt.DataSource = xmlds; rpt.Run(true); DataDynamics.ActiveReports.Export.Pdf.PdfExport xPdf = new DataDynamics.ActiveReports.Export.Pdf.PdfExport(); xPdf.Export(rpt.Document, @ConfigurationManager.AppSettings["DocsDir"] + directory + @"\" + filename + ".pdf"); } } catch(Exception ex) { return "Error: " + ex.ToString(); } return @ConfigurationManager.AppSettings["DocsDir"] + directory + @"\" + filename + ".pdf"; }
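    One low-risk hardening step for the data-access part is to put the connection and command in using blocks so they are released even when the report step throws; leaked connections are a common way for a service like this to degrade gradually and then recover after an app-domain restart (which is effectively what touching the config file does). This is only a sketch and assumes AdpConnection, AdpCommand, and AdpParameter are IDisposable ADO.NET-style types, as their usage above suggests:

        string aResponse;
        using (AdpConnection conn = new AdpConnection(ConfigurationManager.AppSettings["myDBConnString"]))
        using (AdpCommand cmd = new AdpCommand("storedproc_GetData", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            AdpParameter param = cmd.CreateParameter("@Identity", DbType.Int32);
            param.Value = identity;
            cmd.Parameters.Add(param);

            conn.Open();
            aResponse = cmd.ExecuteScalar().ToString();
        }   // connection and command disposed here, even on exceptions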

    Read the article

  • Process spawned by exec-maven-plugin blocks the maven process

    - by Arnab Biswas
    I am trying to execute the following scenario using Maven: pre-integration-test phase: start a Java-based application using a main class (using exec-maven-plugin); integration-test phase: run the integration test cases (using maven-failsafe-plugin); post-integration-test phase: stop the application gracefully (using exec-maven-plugin). Here is a snip from pom.xml: <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <version>1.2.1</version> <executions> <execution> <id>launch-myApp</id> <phase>pre-integration-test</phase> <goals> <goal>exec</goal> </goals> </execution> </executions> <configuration> <executable>java</executable> <arguments> <argument>-DMY_APP_HOME=/usr/home/target/local</argument> <argument>-Djava.library.path=/usr/home/other/lib</argument> <argument>-classpath</argument> <classpath/> <argument>com.foo.MyApp</argument> </arguments> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-failsafe-plugin</artifactId> <version>2.12</version> <executions> <execution> <goals> <goal>integration-test</goal> <goal>verify</goal> </goals> </execution> </executions> <configuration> <forkMode>always</forkMode> </configuration> </plugin> </plugins> If I execute mvn post-integration-test, my application gets started as a child process of the Maven process, but the application process blocks the Maven process from executing the integration tests which come in the next phase. Later I found that there is a bug (or missing functionality?) in the Maven exec plugin, because of which the application process blocks the Maven process. To address this issue, I have encapsulated the invocation of MyApp.java in a shell script and then appended "> /dev/null 2>&1 &" to spawn a separate background process. Here is the snip (this is just a snip and not the actual one) from runTest.sh: java -DMY_APP_HOME=$2 com.foo.MyApp > /dev/null 2>&1 & Although this solves my issue, is there any other way to do it? Am I missing any argument for exec-maven-plugin?
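    A fuller sketch of the wrapper-script approach described above (hypothetical paths and argument order; the idea is simply that the exec goal returns as soon as the script exits, while the application keeps running detached):

        #!/bin/sh
        # runTest.sh -- hypothetical wrapper started by exec-maven-plugin in pre-integration-test.
        # $1 = MY_APP_HOME, $2 = classpath
        nohup java -DMY_APP_HOME="$1" -Djava.library.path=/usr/home/other/lib \
            -classpath "$2" com.foo.MyApp > /dev/null 2>&1 &
        echo $! > /tmp/myapp.pid   # record the PID so the post-integration-test script can kill it

    Another option sometimes used is the maven-antrun-plugin, whose Ant exec task supports spawn="true" to launch a process without blocking the build.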

    Read the article

  • Silverlight DataGrid not stretching to accommodate all items in data source?

    - by bplus
    I'm having problems getting a Silverlight DataGrid to stretch to accommodate all the items in it's dataSource. I've got a Grid that contains two DataGrids. I've tried setting height=Auto on the Grid and the DataGrids. I've tried setting HorizontalContentAlignment="Stretch" on the Grid and the DataGrids. The object tag has height="100%" I've set the Height="*" on the RowDefinitions for the Grid Any help would be very much appreciated! Here's the code listing: <UserControl x:Class="TimeSheet.SilverLight.MainPage" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:local="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Data" mc:Ignorable="d"> <Grid x:Name="LayoutRoot" Height="Auto" ShowGridLines="True" HorizontalAlignment="Stretch" > <Grid.RowDefinitions> <RowDefinition Height="*"/> <RowDefinition Height="*"/> <RowDefinition Height="*"/> <RowDefinition Height="*"/> </Grid.RowDefinitions> <local:DataGrid BorderThickness="5" HorizontalContentAlignment="Stretch" AutoGenerateColumns="False" VerticalAlignment="Top" x:Name="NonProjectGrid" Grid.Row="0"> <local:DataGrid.Columns> <local:DataGridTextColumn Header="Activity" Binding="{Binding TaskName}" /> <local:DataGridTextColumn Header="Monday" Binding="{Binding Monday, Mode=TwoWay}" /> <local:DataGridTextColumn Header="Tuesday" Binding="{Binding Tuesday, Mode=TwoWay}" /> <local:DataGridTextColumn Header="Wednesday" Binding="{Binding Wednesday, Mode=TwoWay}" /> <local:DataGridTextColumn Header="Thursday" Binding="{Binding Thursday, Mode=TwoWay}" /> <local:DataGridTextColumn Header="Friday" Binding="{Binding Friday, Mode=TwoWay}" /> <local:DataGridTextColumn Header="Saturday" Binding="{Binding Saturday, Mode=TwoWay}" /> <local:DataGridTextColumn Header="Sunday" Binding="{Binding Sunday, Mode=TwoWay}" /> </local:DataGrid.Columns> </local:DataGrid> <local:DataGrid BorderThickness="5" HorizontalContentAlignment="Stretch" AutoGenerateColumns="False" VerticalAlignment="Top" x:Name="ProjectGrid" Grid.Row="2"> <local:DataGrid.Columns> <local:DataGridTextColumn Header="Bug Number" Binding="{Binding BugNo}" /> <local:DataGridTextColumn Header="Activity" Binding="{Binding TaskName}" /> <local:DataGridTextColumn Header="Monday" Binding="{Binding Monday, Mode=TwoWay}" /> <local:DataGridTextColumn Header="Tuesday" Binding="{Binding Tuesday, Mode=TwoWay}" /> <local:DataGridTextColumn Header="Wednesday" Binding="{Binding Wednesday, Mode=TwoWay}" /> <local:DataGridTextColumn Header="Thursday" Binding="{Binding Thursday, Mode=TwoWay}" /> <local:DataGridTextColumn Header="Friday" Binding="{Binding Friday, Mode=TwoWay}" /> <local:DataGridTextColumn Header="Saturday" Binding="{Binding Saturday, Mode=TwoWay}" /> <local:DataGridTextColumn Header="Sunday" Binding="{Binding Sunday, Mode=TwoWay}" /> </local:DataGrid.Columns> </local:DataGrid> <Button Name="AddBugBtn" Width="125" Height="25" Content="Add From Bugzilla" Click="AddBug_Click" Grid.Row="3" HorizontalAlignment="Right"></Button> <Button Name="SaveBtn" Width="125" Height="25" Content="Save" Click="Save_Click" Grid.Row="3" HorizontalAlignment="Left"></Button> </Grid>
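    One thing that may be worth trying (a sketch, not from the original post): star-sized rows divide whatever height is available rather than growing with their content, so the rows that host the DataGrids can be switched to Auto so each grid sizes to its items:

        <Grid.RowDefinitions>
            <!-- Auto lets each row grow to fit the DataGrid it hosts -->
            <RowDefinition Height="Auto"/>
            <RowDefinition Height="Auto"/>
            <RowDefinition Height="Auto"/>
            <RowDefinition Height="Auto"/>
        </Grid.RowDefinitions>

    If the content can become taller than the browser window, the Grid usually also needs to sit inside a ScrollViewer so the extra rows remain reachable.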

    Read the article

  • Can't run my servlet from the Tomcat server even though the classes are in the package

    - by Mido
    Hi there, I am trying to get my servlet to run. I have been searching for 2 days and trying every possible solution with no luck. The servlet class is in the appropriate folder (i.e. under the package name). I also added the jar files needed by my servlet into the lib folder. The web.xml file maps the URL and defines the servlet. So I did everything in the documentation and what people said on here, and I am still getting this error: type Exception report message description The server encountered an internal error () that prevented it from fulfilling this request. exception javax.servlet.ServletException: Error instantiating servlet class assign1a.RPCServlet org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:108) org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:558) org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:379) org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:282) org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:357) org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:1687) java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) java.lang.Thread.run(Thread.java:619) root cause java.lang.NoClassDefFoundError: assign1a/RPCServlet (wrong name: server/RPCServlet) java.lang.ClassLoader.defineClass1(Native Method) java.lang.ClassLoader.defineClassCond(ClassLoader.java:632) java.lang.ClassLoader.defineClass(ClassLoader.java:616) java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141) org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:2820) org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:1143) org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1638) org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1516) org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:108) org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:558) org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:379) org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:282) org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:357) org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:1687) java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) java.lang.Thread.run(Thread.java:619) note The full stack trace of the root cause is available in the Apache Tomcat/7.0.5 logs. 
Also here is my servlet code : package assign1a; import java.io.IOException; import java.util.logging.Level; import java.util.logging.Logger; import javax.servlet.ServletException; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import lib.jsonrpc.RPCService; public class RPCServlet extends HttpServlet { /** * */ private static final long serialVersionUID = -5274024331393844879L; private static final Logger log = Logger.getLogger(RPCServlet.class.getName()); protected RPCService service = new ServiceImpl(); public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException { response.setContentType("text/html"); response.getWriter().write("rpc service " + service.getServiceName() + " is running..."); } public void doPost(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException { try { service.dispatch(request, response); } catch (Throwable t) { log.log(Level.WARNING, t.getMessage(), t); } } } Please help me :) Thanks. EDIT: here are the contents of my web.xml file <web-app xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd" version="3.0" metadata-complete="true"> <servlet> <servlet-name>jsonrpc</servlet-name> <servlet-class>assign1a.RPCServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>jsonrpc</servlet-name> <url-pattern>/rpc</url-pattern> </servlet-mapping> </web-app>
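    The "wrong name: server/RPCServlet" part of the root cause suggests that the deployed RPCServlet.class was compiled from a source file declaring a different package (server) and then copied into WEB-INF/classes/assign1a. A minimal sketch of what has to line up (assuming assign1a is the intended package):

        // src/assign1a/RPCServlet.java -- the package declaration, the folder the compiled
        // .class ends up in, and the <servlet-class> entry in web.xml must all agree.
        package assign1a;                      // not "package server;"

        public class RPCServlet extends javax.servlet.http.HttpServlet {
            // ... body exactly as in the question ...
        }

    After correcting the package declaration, rebuild and redeploy so that the class at WEB-INF/classes/assign1a/RPCServlet.class is the freshly compiled one.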

    Read the article

  • Trouble recovering MySQL InnoDB database after server crash

    - by Andy Shinn
    I had a server crash due to a broken iSCSI link (the filesystem went into read-only mode). After repairing the link and rebooting the machine (CentOS 5 / MySQL 5.1), the MySQL server would not start and gave the following error: 100603 19:11:46 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 100603 19:11:46 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... InnoDB: Warning: database page corruption or a failed InnoDB: file read of page 112541. InnoDB: Trying to recover it from the doublewrite buffer. InnoDB: Dump of the page: 100603 19:11:46 InnoDB: Page dump in ascii and hex (16384 bytes): lots of binary data 100603 19:11:46 InnoDB: Page checksum 953720272, prior-to-4.0.14-form checksum 2641912043 InnoDB: stored checksum 617821918, prior-to-4.0.14-form stored checksum 2080617765 InnoDB: Page lsn 115 2632899642, low 4 bytes of lsn at page end 2641594600 InnoDB: Page number (if stored to page already) 112541, InnoDB: space id (if created with = MySQL-4.1.1 and stored already) 0 InnoDB: Page may be an index page where index id is 0 616 InnoDB: Dump of corresponding page in doublewrite buffer: 100603 19:11:46 InnoDB: Page dump in ascii and hex (16384 bytes): more binary data 100603 19:11:46 InnoDB: Page checksum 908374788, prior-to-4.0.14-form checksum 824841363 InnoDB: stored checksum 912869634, prior-to-4.0.14-form stored checksum 2210927931 InnoDB: Page lsn 115 2635312169, low 4 bytes of lsn at page end 2633173354 InnoDB: Page number (if stored to page already) 112541, InnoDB: space id (if created with = MySQL-4.1.1 and stored already) 0 InnoDB: Page may be an index page where index id is 0 616 InnoDB: Also the page in the doublewrite buffer is corrupt. InnoDB: Cannot continue operation. InnoDB: You can try to recover the database with the my.cnf InnoDB: option: InnoDB: innodb_force_recovery=6 100603 19:11:46 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended Per the error message, I have tried setting set-variable=innodb_force_recovery=6 in the my.cnf to get to the data. This allows the MySQL server to start. But when I try to do a mysqldump of the database or a SELECT * INTO OUTFILE "filename" FROM broken_table; it seems to endlessly just export the same line over and over again. I have also tried http://code.google.com/p/innodb-tools/. But this tool fails with an error that 'blob' type is not supported. If I try to access the data using the PHP application it crashes MySQL: `100603 19:19:19 - mysqld got signal 11 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=8384512 read_buffer_size=131072 max_used_connections=2 max_threads=151 threads_connected=2 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 338317 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd: 0x15d33f0 Attempting backtrace. 
You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... stack_bottom = 0x453aff00 thread_stack 0x40000 /usr/libexec/mysqld(my_print_stacktrace+0x24) [0x874364] /usr/libexec/mysqld(handle_segfault+0x346) [0x5c9166] /lib64/libpthread.so.0 [0x3a6e40eb10] /usr/libexec/mysqld(rec_get_offsets_func+0x30) [0x7cc310] /usr/libexec/mysqld [0x7674d8] /usr/libexec/mysqld(btr_search_info_update_slow+0x638) [0x768d48] /usr/libexec/mysqld(btr_cur_search_to_nth_level+0xc7d) [0x75f86d] /usr/libexec/mysqld [0x7dd1c1] /usr/libexec/mysqld(row_search_for_mysql+0x18b0) [0x7e03d0] /usr/libexec/mysqld(ha_innobase::general_fetch(unsigned char*, unsigned int, unsigned int)+0x7c) [0x7526fc] /usr/libexec/mysqld(handler::read_multi_range_next(st_key_multi_range**)+0x29) [0x6aed09] /usr/libexec/mysqld(QUICK_RANGE_SELECT::get_next()+0x194) [0x69a964] /usr/libexec/mysqld [0x6aafe9] /usr/libexec/mysqld(sub_select(JOIN*, st_join_table*, bool)+0x56) [0x635196] /usr/libexec/mysqld [0x63f9cd] /usr/libexec/mysqld(JOIN::exec()+0x950) [0x6497c0] /usr/libexec/mysqld(mysql_select(THD*, Item*, TABLE_LIST*, unsigned int, List&, Item*, unsigned int, st_order*, st_order*, Item*, st_order*, unsign ed long long, select_result*, st_select_lex_unit*, st_select_lex*)+0x17b) [0x64b34b] /usr/libexec/mysqld(handle_select(THD*, st_lex*, select_result*, unsigned long)+0x169) [0x64bc79] /usr/libexec/mysqld [0x5d34b6] /usr/libexec/mysqld(mysql_execute_command(THD*)+0x4e5) [0x5d6b45] /usr/libexec/mysqld(mysql_parse(THD*, char const*, unsigned int, char const)+0x211) [0x5dc321] /usr/libexec/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x10b8) [0x5dd3f8] /usr/libexec/mysqld(do_command(THD*)+0xe6) [0x5dd9e6] /usr/libexec/mysqld(handle_one_connection+0x73d) [0x5d036d] /lib64/libpthread.so.0 [0x3a6e40673d] /lib64/libc.so.6(clone+0x6d) [0x3a6d4d3d1d] Trying to get some variables. Some pointers may be invalid and cause the dump to abort... thd-query at 0x15de5e0 = SELECT DISTINCT count(DISTINCT i.itemid) as rowscount,i.hostid FROM items i WHERE ((i.itemid BETWEEN 000000000000000 AND 0999999 99999999)) AND i.type<9 AND (i.hostid IN (10017,10047,10050,10054,10056,10059,10062,10063,10064,10065,10066,10067,10068,10069,10070,10071,10072,10073,100 74,10075,10076,10077,10078,10079,10080,10081,10082,10084,10088,10089,10090,10091,10092,10093,10094,10095,10096,10097,10098,10099,10100,10101,10102,10103,10 104,10105,10106,10107,10108,10109)) GROUP BY i.hostid thd-thread_id=3 thd-killed=NOT_KILLED The manual page at contains information that should help you find out what is causing the crash. 100603 19:19:19 mysqld_safe Number of processes running now: 0 100603 19:19:19 mysqld_safe mysqld restarted` Before recovering form an older backup as a last resort I am looking for anymore suggestions. Thanks!
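    For the dump step, a sketch of the usual incremental approach (hypothetical paths, not from the original post): step innodb_force_recovery up from 1 instead of jumping straight to 6, since the higher levels disable progressively more of InnoDB, and a dump taken at the lowest level that lets the server start tends to be more trustworthy:

        # /etc/my.cnf -- restart mysqld after each change, starting from the lowest level that boots
        [mysqld]
        innodb_force_recovery = 1    # try 1, 2, 3 before resorting to 4-6, then dump what is readable:
        #   mysqldump --all-databases > /tmp/recovery.sql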

    Read the article

  • How to overcome shortcomings in reporting from EAV database?

    - by David Archer
    The major shortcomings with Entity-Attribute-Value database designs in SQL all seem to be related to being able to query and report on the data efficiently and quickly. Most of the information I read on the subject warns against implementing EAV due to these problems and the commonality of querying/reporting for almost all applications. I am currently designing a system where almost all the fields necessary for data storage are not known at design/compile time and are defined by the end-user of the system. EAV seems like a good fit for this requirement, but due to the problems I've read about, I am hesitant to implement it, as there are also some pretty heavy reporting requirements for this system. I think I've come up with a way around this but would like to pose the question to the SO community. Given that a typical normalized database (OLTP) still isn't always the best option for running reports, a good practice seems to be having a "reporting" database (OLAP) where the data from the normalized database is copied to, indexed extensively, and possibly denormalized for easier querying. Could the same idea be used to work around the shortcomings of an EAV design? The main downside I see is the increased complexity of transferring the data from the EAV database to reporting, as you may end up having to alter the tables in the reporting database as new fields are defined in the EAV database. But that is hardly impossible and seems to be an acceptable tradeoff for the increased flexibility given by the EAV design. This downside also exists if I use a non-SQL data store (e.g. CouchDB or similar) for the main data storage, since all the standard reporting tools are expecting a SQL backend to query against. Do the issues with EAV systems mostly go away if you have a separate reporting database for querying? EDIT: Thanks for the comments so far. One of the important things about the system I'm working on is that I'm really only talking about using EAV for one of the entities, not everything in the system. The whole gist of the system is to be able to pull data from multiple disparate sources that are not known ahead of time and crunch the data to come up with some "best known" data about a particular entity. So every "field" I'm dealing with is multi-valued and I'm also required to track history for each. The normalized design for this ends up being 1 table per field which makes querying it kind of painful anyway. 
Here are the table schemas and sample data I'm looking at (obviously changed from what I'm working on but I think it illustrates the point well):

EAV Tables

Person
-------------------
- Id  - Name      -
-------------------
- 123 - Joe Smith -
-------------------

Person_Value
--------------------------------------------------------------------
- PersonId - Source - Field       - Value         - EffectiveDate  -
--------------------------------------------------------------------
- 123      - CIA    - HomeAddress - 123 Cherry Ln - 2010-03-26     -
- 123      - DMV    - HomeAddress - 561 Stoney Rd - 2010-02-15     -
- 123      - FBI    - HomeAddress - 676 Lancas Dr - 2010-03-01     -
--------------------------------------------------------------------

Reporting Table

Person_Denormalized
-----------------------------------------------------------------------------------------
- Id  - Name      - HomeAddress   - HomeAddress_Confidence - HomeAddress_EffectiveDate  -
-----------------------------------------------------------------------------------------
- 123 - Joe Smith - 123 Cherry Ln - 0.713                  - 2010-03-26                 -
-----------------------------------------------------------------------------------------

Normalized Design

Person
-------------------
- Id  - Name      -
-------------------
- 123 - Joe Smith -
-------------------

Person_HomeAddress
-------------------------------------------------------
- PersonId - Source - Value         - Effective Date  -
-------------------------------------------------------
- 123      - CIA    - 123 Cherry Ln - 2010-03-26      -
- 123      - DMV    - 561 Stoney Rd - 2010-02-15      -
- 123      - FBI    - 676 Lancas Dr - 2010-03-01      -
-------------------------------------------------------

The "Confidence" field here is generated using logic that cannot be expressed easily (if at all) using SQL so my most common operation besides inserting new values will be pulling ALL data about a person for all fields so I can generate the record for the reporting table. This is actually easier in the EAV model as I can do a single query. In the normalized design, I end up having to do 1 query per field to avoid a massive cartesian product from joining them all together.
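    As a rough illustration of the refresh step (a sketch using the example tables above; the exact upsert syntax depends on the RDBMS, and the confidence score would still be computed in application code), pulling the latest value per field out of the EAV table into the reporting row looks something like this:

        -- Rebuild the HomeAddress columns of one reporting row from the EAV rows,
        -- taking the most recent value recorded for that field.
        INSERT INTO Person_Denormalized (Id, Name, HomeAddress, HomeAddress_EffectiveDate)
        SELECT p.Id,
               p.Name,
               pv.Value,
               pv.EffectiveDate
        FROM   Person p
        JOIN   Person_Value pv
               ON  pv.PersonId = p.Id
               AND pv.Field = 'HomeAddress'
               AND pv.EffectiveDate = (SELECT MAX(pv2.EffectiveDate)
                                       FROM   Person_Value pv2
                                       WHERE  pv2.PersonId = pv.PersonId
                                       AND    pv2.Field = pv.Field)
        WHERE  p.Id = 123;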

    Read the article

  • Event receiver on Content Type not triggered on WikiPageLibrary

    - by Ciprian Grosu
    Hello all, I created a new content type for a wiki page library. I added this content type to the library by code (the interface did not allow this). Next, I added an event receiver to this content type (on ItemAdded and ItemAdding). My problem is that no event is triggered. If I add these events directly to the wiki page library, all works fine. Is there a limitation/bug/trick? I looked at the content type attached to the library with SharePoint Manager and in its schema the part for the event receiver is missing... I know that there should be something like: <XmlDocuments> <XmlDocument NamespaceURI="http://schemas.microsoft.com/sharepoint/events"> <spe:Receivers xmlns:spe="http://schemas.microsoft.com/sharepoint/events"> <Receiver> <Name> </Name> <Type>1</Type> <SequenceNumber>10000</SequenceNumber> <Assembly>RssFeedWP, Version=1.0.0.0, Culture=neutral, PublicKeyToken=f6722cbeba696def</Assembly> <Class>RssFeedWP.ItemEventReceiver</Class> <Data> </Data> <Filter> </Filter> </Receiver> <Receiver> <Name> </Name> <Type>10001</Type> <SequenceNumber>10000</SequenceNumber> <Assembly>RssFeedWP, Version=1.0.0.0, Culture=neutral, PublicKeyToken=f6722cbeba696def</Assembly> <Class>RssFeedWP.ItemEventReceiver</Class> <Data> </Data> <Filter> </Filter> </Receiver> </spe:Receivers> </XmlDocument> If I look with SPM at the content type added to the site, I do see this part in the schema. Here is my code: public override void FeatureActivated(SPFeatureReceiverProperties properties) { using (SPWeb web = (SPWeb)properties.Feature.Parent) { // create RssWiki content type SPContentType rssFeedContentType = new SPContentType(web.AvailableContentTypes["Wiki Page"], web.ContentTypes, "RssFeed Wiki Page"); // add rssfeed url field to the new content type AddFieldToContentType(web, rssFeedContentType, "RssFeed Url", SPFieldType.Note); // add use xslt check box field to the new content type AddFieldToContentType(web, rssFeedContentType, "Use Xslt", SPFieldType.Boolean); // add xslt url field to the new content type AddFieldToContentType(web, rssFeedContentType, "Xslt Url", SPFieldType.Note); web.ContentTypes.Add(rssFeedContentType); rssFeedContentType.Update(); web.Update(); AddContentTypeToList(web, rssFeedContentType); AddEventReceiversToCT(rssFeedContentType); //AddEventReceiverToList(web); } } private void AddFieldToContentType(SPWeb web, SPContentType ct, string fieldName, SPFieldType fieldType) { SPField rssUrlField = null; try { rssUrlField = web.Fields.GetField(fieldName); } catch (Exception ex) { if (rssUrlField == null) { web.Fields.Add(fieldName, fieldType, false); } } SPFieldLink rssUrlFieldLink = new SPFieldLink(web.Fields[fieldName]); ct.FieldLinks.Add(rssUrlFieldLink); } private static void AddContentTypeToList(SPWeb web, SPContentType ct) { SPList wikiList = web.Lists[listName]; wikiList.ContentTypesEnabled = true; wikiList.ContentTypes.Add(ct); wikiList.Update(); } private static void AddEventReceiversToCT(SPContentType ct) { //add event receivers string assemblyName = System.Reflection.Assembly.GetExecutingAssembly().FullName; string ctReceiverName = "RssFeedWP.ItemEventReceiver"; ct.EventReceivers.Add(SPEventReceiverType.ItemAdding, assemblyName, ctReceiverName); ct.EventReceivers.Add(SPEventReceiverType.ItemAdded, assemblyName, ctReceiverName); ct.Update(); } Thx!
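    A sketch of one likely cause (an assumption, not confirmed in the post): the receivers are added to the site content type only after it has already been attached to the list, and a plain Update() does not push that change down to the list's copy of the content type. Either add the receivers before AddContentTypeToList, or propagate the change to child content types:

        private static void AddEventReceiversToCT(SPContentType ct)
        {
            string assemblyName = System.Reflection.Assembly.GetExecutingAssembly().FullName;
            string ctReceiverName = "RssFeedWP.ItemEventReceiver";

            ct.EventReceivers.Add(SPEventReceiverType.ItemAdding, assemblyName, ctReceiverName);
            ct.EventReceivers.Add(SPEventReceiverType.ItemAdded, assemblyName, ctReceiverName);

            // true = also update content types derived from this one, including the copy
            // the wiki page library holds, so it picks up the receiver definitions.
            ct.Update(true);
        }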

    Read the article

  • Inherited variables are not read correctly when using bitwise comparisons

    - by Shawn B
    Hey, I have a few classes set up for a game, with XMapObject as the base, and XEntity, XEnviron, and XItem inheriting it. MapObjects have a number of flags, one of them being MAPOBJECT_SOLID. My problem is that XEntity is the only class that correctly detects MAPOBJECT_SOLID. Both Items and Environs are always considered solid by the game, regardless of the flag's state. What is important is that Environs and Items should almost never be solid. Here are the relevant code samples: XMapObject: class XMapObject : public XObject { public: Uint8 MapObjectType,Location[2],MapObjectFlags; XMapObject *NextMapObject,*PrevMapObject; XMapObject(); void CreateMapObject(Uint8 MapObjectType); void SpawnMapObject(Uint8 MapObjectLocation[2]); void RemoveMapObject(); void DeleteMapObject(); void MapObjectSetLocation(Uint8 Y,Uint8 X); void MapObjectMapLink(); void MapObjectMapUnlink(); }; XEntity: class XEntity : public XMapObject { public: Uint8 Health,EntityFlags; float Speed,Time; XEntity *NextEntity,*PrevEntity; XItem *IventoryList; XEntity(); void CreateEntity(Uint8 EntityType,Uint8 EntityLocation[2]); void DeleteEntity(); void EntityLink(); void EntityUnlink(); Uint8 MoveEntity(Uint8 YOffset,Uint8 XOffset); }; XEnviron: class XEnviron : public XMapObject { public: Uint8 Effect,TimeOut; void CreateEnviron(Uint8 Type,Uint8 Y,Uint8 X,Uint8 TimeOut); }; XItem: class XItem : public XMapObject { public: void CreateItem(Uint8 Type,Uint8 Y,Uint8 X); }; And lastly, the entity move code. Only entities are capable of moving themselves. Uint8 XEntity::MoveEntity(Uint8 YOffset,Uint8 XOffset) { Uint8 NewY = Location[0] + YOffset, NewX = Location[1] + XOffset; if((NewY >= 0 && NewY < MAPY) && (NewX >= 0 && NewX < MAPX)) { XTile *Tile = GetTile(NewY,NewX); if(Tile->MapList != NULL) { XMapObject *MapObject = Tile->MapList; while(MapObject != NULL) { if(MapObject->MapObjectFlags & MAPOBJECT_SOLID) { printf("solid\n"); return 0; } MapObject = MapObject->NextMapObject; } } if(Tile->Flags & TILE_SOLID && EntityFlags & ENTITY_CLIPPING) { return 0; } this->MapObjectSetLocation(NewY,NewX); return 1; } return 0; } What is weird is that the bitwise operator always returns true when the MapObject is an Environ or an Item, but it works correctly for Entities. For debugging I am using the "solid" printf, and also a printf containing the value of the flag for both Environs and Items. Any help is greatly appreciated, as this is a major bug for the small game I am working on.
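    A sketch of the most likely culprit (an assumption, since the constructor bodies are not shown in the post): if MapObjectFlags is never initialised in the XMapObject constructor, Environs and Items read whatever garbage byte happens to be in that slot, which usually tests as solid, while XEntity presumably works only because its own construction path assigns the flags. Zeroing everything in the base constructor removes the difference:

        // Base-class constructor: give every map object a well-defined starting state so
        // derived classes that never assign MapObjectFlags don't inherit random memory.
        XMapObject::XMapObject()
        {
            MapObjectType = 0;
            Location[0] = Location[1] = 0;
            MapObjectFlags = 0;                  // without this, (Flags & MAPOBJECT_SOLID) is undefined
            NextMapObject = PrevMapObject = NULL;
        }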

    Read the article

  • WMI not available for some time after reboot

    - by Alex Okrushko
    I'm having a problem with WMI availability on logon. Right after a reboot I open cmd and a Python interpreter: >>> import wmi >>> c = wmi.WMI() >>> c.Win32_OperatingSystem() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Python27\lib\site-packages\wmi.py", line 1147, in __getattr__ return getattr (self._namespace, attribute) File "C:\Python27\lib\site-packages\win32com\client\dynamic.py", line 516, in __getattr__ raise AttributeError("%s.%s" % (self._username_, attr)) AttributeError: winmgmts:.Win32_OperatingSystem >>> 5 minutes later I open another cmd and Python interpreter: Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win 32 Type "help", "copyright", "credits" or "license" for more information. >>> import wmi >>> c = wmi.WMI() >>> c.Win32_OperatingSystem() [<_wmi_object: \\W520-ALEX-WIN7\root\cimv2:Win32_OperatingSystem=@>] >>> NOTE: the first cmd still keeps saying AttributeError even 5 minutes later. NOTE 2: if I log out and log in, WMI is available, so it is somehow affected by the reboot. With Process Explorer I checked the environment variables and they are the same for both cmds. What could that be? Please help. UPDATE: Apparently the problem is connecting to the WBEM services: >>> import win32com.client >>> win32com.client.Dispatch('WbemScripting.SWbemLocator') <COMObject WbemScripting.SWbemLocator> >>> wmi_service= win32com.client.Dispatch('WbemScripting.SWbemLocator') >>> wbem_service = wmi_service.ConnectServer('.','root/cimv2') >>> wbem_service <COMObject <unknown>> >>> items = wbem_service.ExecQuery('Select * from Win32_OperatingSystem') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<COMObject <unknown>>", line 3, in ExecQuery File "C:\Python27\lib\site-packages\win32com\client\dynamic.py", line 282, in _ApplyTypes_ result = self._oleobj_.InvokeTypes(*(dispid, LCID, wFlags, retType, argTypes ) + args) pywintypes.com_error: (-2147352567, 'Exception occurred.', (0, u'SWbemServicesEx ', u'Generic failure ', None, 0, -2147217407), None) >>> NOTE 3: wmic os always worked. NOTE 4: re-installing the pywin32 package didn't help. Neither did re-registering/re-compiling the WMI components and resetting the WMI database (as recommended here). NOTE 5: my 4 other laptops don't have this problem. 
Also wmiprov.log has: (Mon Oct 29 11:40:07 2012.248587) : *************************************** (Mon Oct 29 11:40:07 2012.248587) : Could not get pointer to binary resource for file: (Mon Oct 29 11:40:07 2012.248587) : C:\Windows\system32\drivers\ndis.sys[MofResourceName](Mon Oct 29 11:40:07 2012.248587) : (Mon Oct 29 11:40:07 2012.248587) : *************************************** (Mon Oct 29 11:40:07 2012.248587) : *************************************** (Mon Oct 29 11:40:07 2012.248587) : Could not get pointer to binary resource for file: (Mon Oct 29 11:40:07 2012.248587) : C:\Windows\system32\drivers\en-US\ndis.sys.mui[MofResourceName](Mon Oct 29 11:40:07 2012.248587) : (Mon Oct 29 11:40:07 2012.248587) : *************************************** (Mon Oct 29 11:40:07 2012.248603) : *************************************** (Mon Oct 29 11:40:07 2012.248603) : Could not get pointer to binary resource for file: (Mon Oct 29 11:40:07 2012.248603) : C:\Windows\system32\DRIVERS\wmiacpi.sys[MofResource](Mon Oct 29 11:40:07 2012.248603) : (Mon Oct 29 11:40:07 2012.248603) : *************************************** (Mon Oct 29 11:40:07 2012.248603) : *************************************** (Mon Oct 29 11:40:07 2012.248603) : Could not get pointer to binary resource for file: (Mon Oct 29 11:40:07 2012.248603) : C:\Windows\system32\DRIVERS\monitor.sys[MonitorWMI](Mon Oct 29 11:40:07 2012.248603) : (Mon Oct 29 11:40:07 2012.248603) : *************************************** NOTE 6: the WMIDiag tool report is at my dropbox
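    Until the underlying service problem is found, a bounded retry at logon is a reasonable stopgap. A minimal sketch (plain Python, using only the calls already shown above):

        # Hypothetical startup helper: keep probing WMI until it answers or a timeout expires.
        import time
        import wmi

        def wait_for_wmi(timeout=300, interval=10):
            deadline = time.time() + timeout
            while True:
                try:
                    c = wmi.WMI()
                    c.Win32_OperatingSystem()   # probe the namespace
                    return c
                except Exception:
                    if time.time() >= deadline:
                        raise
                    time.sleep(interval)

        c = wait_for_wmi()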

    Read the article

  • IE attachEvent on object tag causes memory corruption

    - by larswa
    I have an ActiveX control embedded in an IE8 HTML page; it has the following event: MessageReceived([in] BSTR srcWindowId, [in] BSTR json). On Windows the event is registered with OCX.attachEvent("MessageReceived", onMessageReceivedFunc). The following code fires the event in the HTML page. HRESULT Fire_MessageReceived(BSTR id, BSTR json) { CComVariant varResult; T* pT = static_cast<T*>(this); int nConnectionIndex; CComVariant* pvars = new CComVariant[2]; int nConnections = m_vec.GetSize(); for (nConnectionIndex = 0; nConnectionIndex < nConnections; nConnectionIndex++) { pT->Lock(); CComPtr<IUnknown> sp = m_vec.GetAt(nConnectionIndex); pT->Unlock(); IDispatch* pDispatch = reinterpret_cast<IDispatch*>(sp.p); if (pDispatch != NULL) { VariantClear(&varResult); pvars[1] = id; pvars[0] = json; DISPPARAMS disp = { pvars, NULL, 2, 0 }; pDispatch->Invoke(0x1, IID_NULL, LOCALE_USER_DEFAULT, DISPATCH_METHOD, &disp, &varResult, NULL, NULL); } } delete[] pvars; // -> Memory Corruption here! return varResult.scode; } After I enabled gflags.exe with Application Verifier, the following strange behaviour occurs: after the Invoke() that executes the JavaScript callback, the BSTR from pvars[1] is copied to pvars[0] for some unknown reason!? The delete[] of pvars then causes a double free of the same string, which ends in a heap corruption. Does anybody have an idea what's going on here? Is this an IE bug, or is there a trick within the OCX implementation that I'm missing? If I use the tag like: <script for="OCX" event="MessageReceived(id, json)" language="JavaScript" type="text/javascript"> window.onMessageReceivedFunc(windowId, json); </script> ... the strange copy operation does not occur. The following code also seems to be OK, due to the fact that the caller of Fire_MessageReceived() is responsible for freeing the BSTRs. HRESULT Fire_MessageReceived(BSTR srcWindowId, BSTR json) { CComVariant varResult; T* pT = static_cast<T*>(this); int nConnectionIndex; VARIANT pvars[2]; int nConnections = m_vec.GetSize(); for (nConnectionIndex = 0; nConnectionIndex < nConnections; nConnectionIndex++) { pT->Lock(); CComPtr<IUnknown> sp = m_vec.GetAt(nConnectionIndex); pT->Unlock(); IDispatch* pDispatch = reinterpret_cast<IDispatch*>(sp.p); if (pDispatch != NULL) { VariantClear(&varResult); pvars[1].vt = VT_BSTR; pvars[1].bstrVal = srcWindowId; pvars[0].vt = VT_BSTR; pvars[0].bstrVal = json; DISPPARAMS disp = { pvars, NULL, 2, 0 }; pDispatch->Invoke(0x1, IID_NULL, LOCALE_USER_DEFAULT, DISPATCH_METHOD, &disp, &varResult, NULL, NULL); } } delete[] pvars; return varResult.scode; } Thanks!

    Read the article

  • Xcode/iPhone Development 6 months in - Annoyances

    - by clearbrian
    Hi, I've been doing iPhone programming for 6 months, coming from a PC/Java/Eclipse background, and I still have a few annoyances with Xcode/iPhone programming that I wonder whether there are shortcuts for. 1) Is there any way to prevent multiple windows opening all the time in Xcode? a) When you click on the errors/warnings in the bottom right of the status bar, build errors are shown in a separate window. Any way to get these to show in the main editor? b) Any way to get the debugger to appear in the main editor? I have a big-screen iMac and it's still window hell on Macs. When you come from Alt-Tab, the Mac is a nightmare. 2) Any way to get a toolbar item on the main editor to: a) open the Console (I know CMD-thingy-R) b) open Breakpoints (you have to open the Debugger first, then Breakpoints). I know there are keyboard shortcuts, but I only have my left hand free (the other is on the trackball), so any keys on the right-hand side of the keyboard are too far away. I know you can add Finder toolbar scripts (just wondering if there is any way to extend Xcode). Are there utilities to extend Xcode? Scripts/Automator/any Services I can set up to help? Can you automate Xcode like you can with Windows/ActiveX/VBA? 3) Limit lookups using Cmd + double-click. If I double-click on a variable to find its definition using Cmd + double-click, it shows every occurrence of all variables with that name (annoying if you name all your maps mapView). Any way to get it to limit to the current class, or at least order the results so the current class is first? 4) Find doesn't seem to loop backwards if the results are all above the cursor position. I'm in a class and I hit Cmd + F for find. The Find box appears. I enter some text and hit return. It says I have x matches, but only the back arrow is highlighted in Find, and when I hit < it does nothing. I need to scroll to the top and redo the search. If the text appears both forwards and backwards then both arrows are highlighted and it works. Is this a bug or a 'feature'? Missing Eclipse features: I have been looking at the User Script menu but was wondering how powerful the scripts are. 5) Are there any scripts around to generate source from members, such as description:, @property, @synthesize? If I add a new member, running a script would generate the @property/@synthesize and the release in dealloc. 7) Any good sites for scripts? SCM: I'm having problems with SCM and folders on the HD under the project's Classes directory. You get a library, e.g. JSON. It usually comes as a folder. You copy it to /Classes for your project: /Classes/JSON. I create a group for the library in Xcode under the Classes group: Classes > JSON. I drag the files from the folder into Xcode into the JSON group. I add them to SCM and the icon changes from ? to A, but if I try to commit them it says folder /JSON is not under SCM. Can you drag a folder into Xcode so that it AND its files get included in SCM? Any way to stop Xcode Help from being on top all the time? I keep feeling like punching it and telling it to get out of the way! :) I don't mind it being open, just not in the way once I've finished. Yes, I know I can Ctrl-W. Sites: the main sites I use to learn Obj-C are: stackoverflow.com, Google Code Search (tonnes of full apps on here), http://www.iphonedevsdk.com/forum/iphone-sdk-development/, Apple Developer Forums (any way to get an RSS feed of these, or is that blasphemy? :) ), Safari (100s of IT books, though probably too many to keep up with :) ). Any others? Any site that gives simple examples for Obj-C/UIKit? The docs just show the methods but not actual examples (Google Code Search has helped a lot here).

    Read the article

  • TableViewCell autorelease error

    - by iAm
    OK, for two days now I have been trying to solve an error I have inside the cellForRowAtIndexPath method; let me start by saying that I have tracked the bug down to this method. The error is [CFDictionary image] or [Not a Type image] message sent to deallocated instance. I know about the debug flags, NSZombie, MallocStack, and others; they helped me narrow it down to this method and why, but I do not know how to solve it besides a redesign of the app UI. So what am I trying to do? This block of code displays a purchase detail, which contains items; the items are in their own section. When in edit mode, a cell appears at the bottom of the items section with a label of "Add new Item", and this button presents a modal view of the add item controller; an item is added and the view returns to the purchase detail screen, with the just-added item in the section just above the "add new Item" cell. The problem happens when I scroll the item section off screen and back into view: the app crashes with EXC_BAD_ACCESS. Even if I don't scroll and instead hit the back button on the navBar, I still get the same error. - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { UITableViewCell *cell = nil; switch (indexPath.section) { case PURCHASE_SECTION: { static NSString *cellID = @"GenericCell"; cell = [tableView dequeueReusableCellWithIdentifier:cellID]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleValue2 reuseIdentifier:cellID] autorelease]; cell.accessoryType = UITableViewCellAccessoryDisclosureIndicator; } switch (indexPath.row) { case CATEGORY_ROW: cell.textLabel.text = @"Category:"; cell.detailTextLabel.text = [self.purchase.category valueForKey:@"name"]; cell.accessoryType = UITableViewCellAccessoryNone; cell.editingAccessoryType = UITableViewCellAccessoryDisclosureIndicator; break; case TYPE_ROW: cell.textLabel.text = @"Type:"; cell.detailTextLabel.text = [self.purchase.type valueForKey:@"name"]; cell.accessoryType = UITableViewCellAccessoryNone; cell.editingAccessoryType = UITableViewCellAccessoryDisclosureIndicator; break; case VENDOR_ROW: cell.textLabel.text = @"Payment:"; cell.detailTextLabel.text = [self.purchase.vendor valueForKey:@"name"]; cell.accessoryType = UITableViewCellAccessoryNone; cell.editingAccessoryType = UITableViewCellAccessoryDisclosureIndicator; break; case NOTES_ROW: cell.textLabel.text = @"Notes"; cell.editingAccessoryType = UITableViewCellAccessoryNone; break; default: break; } break; } case ITEMS_SECTION: { NSUInteger itemsCount = [items count]; if (indexPath.row < itemsCount) { static NSString *itemsCellID = @"ItemsCell"; cell = [tableView dequeueReusableCellWithIdentifier:itemsCellID]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleSubtitle reuseIdentifier:itemsCellID] autorelease]; cell.accessoryType = UITableViewCellAccessoryNone; } singleItem = [self.items objectAtIndex:indexPath.row]; cell.textLabel.text = singleItem.name; cell.detailTextLabel.text = [singleItem.amount formattedDataDisplay]; cell.imageView.image = [singleItem.image image]; } else { static NSString *AddItemCellID = @"AddItemCell"; cell = [tableView dequeueReusableCellWithIdentifier:AddItemCellID]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:AddItemCellID] autorelease]; cell.accessoryType = UITableViewCellAccessoryDisclosureIndicator; } cell.textLabel.text = @"Add Item"; } break; } case LOCATION_SECTION: { static NSString *localID = @"LocationCell"; cell = [tableView dequeueReusableCellWithIdentifier:localID]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleSubtitle reuseIdentifier:localID] autorelease]; cell.accessoryType = UITableViewCellAccessoryNone; } cell.textLabel.text = @"Purchase Location"; cell.accessoryType = UITableViewCellAccessoryDisclosureIndicator; cell.editingAccessoryType = UITableViewCellAccessoryNone; break; } default: break; } return cell; } The singleItem is of model type PurchaseItem (Core Data). Now that I know what is causing the error, how do I solve it? I have tried everything that I know and some of what I don't know, but still no progress. Any suggestions as to how to solve this without a redesign would be welcome; perhaps there is an error I am making that I cannot see, but if it's the nature of autorelease, then I will redesign.
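    One thing worth checking (an assumption, since the ivar/property declarations aren't shown): singleItem looks like an instance variable assigned without being retained, so after the items array changes it can be left pointing at a deallocated PurchaseItem, and the next reload of that section touches a freed object. Using a plain local variable in that branch avoids holding the stale pointer:

        // Sketch: let self.items own the object; don't keep it in an ivar across reloads.
        PurchaseItem *item = [self.items objectAtIndex:indexPath.row];
        cell.textLabel.text = item.name;
        cell.detailTextLabel.text = [item.amount formattedDataDisplay];
        cell.imageView.image = [item.image image];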

    Read the article

  • SVN: Release branch headaches, how to merge in website revisions as and when cleared to go live?

    - by Pete Duncanson
    I need a sanity check here if we can; any ideas on correcting/changing the following are very welcome! We've been getting ourselves in knots of late with our SVN and are trying to correct it by putting a Trunk/Release system in place. We have a large website that we develop on and we store it all in SVN. Here's what we had in mind: We have trunk and a release branch. All work gets checked into Trunk. When a feature is deemed ready for the next release it is merged into a Release branch. We only have one release branch and just tag "Latest" when we do a push to live. We hope to be able to get all the files changed from Latest to Head to give us a zip that we can upload (any ideas on an easy way to do this via scripting?). So we set all this up and were very happy with ourselves. Except it's not working, and here's why. We work on lots of different features/fixes/problems at once and they don't all get nicely checked in feature complete (but always working at least). Then sometimes you have to wait for Clients to sign off. As a result you end up with revisions which are "ready for live" being scattered among ones which are "still being worked on" in trunk. That means that the completed revisions are not getting merged in sequentially but out of order. I thought SVN could handle this, clever little thing it is, but apparently not. Here's an example: Pete changes some CSS to make a new button look pretty (Revision 1). Dave adds some CSS to the bottom of the same CSS file as Pete's for a new feature (Revision 2). Dave's mod gets the nod so he merges it into Release and commits it with a log message mentioning the revision number and bug tracking ID. Pete adds more buttons to finish this mod, no CSS changes here though (Revision 3). Pete then merges his mods (Revisions 1 and 3) into the Head of Release (which has Dave's merge in it), but this overwrites Dave's CSS additions, which now disappear completely. This leads to the site being broken and the Release branch being pretty much useless. So we tried some other ideas, like reverting Release back to "Latest" and then just merging in Revisions 1, 2 and 3 in order. This worked fine until we had Revision 4, which was not ready for live, and Revision 5, which was. Suddenly we are getting ourselves in knots again with exactly the same problem! OK, so take three. Revert to Latest, merge in Revision 5, then do an update back to Head. Tree conflicts galore! So that's a no-no. I cracked in the end and built it all up manually, but it's not something I want to do regularly; ideally I want to script our deployment, but I can't while Release is in such a mess. HELP! What the heck are we doing wrong? I can't seem to find any solutions to this problem of wanting different, non-sequential revisions in Release. If it's not possible that's fine, but how the heck are we meant to get stuff live easily? We can't branch for every single change; the site takes 30+ minutes to check out, so it would take too long. Side note: we are using TortoiseSVN, so can we keep command line examples to a minimum in any answers? Latest version of TSVN and SVN version 1.6, so we have the funky merge tracking etc. EDIT: An excellent blog post which deals with the dev/release cycle (although using Git, it is still relevant); thought everyone would like to read it if they found this question interesting. 
(http://nvie.com/git-model) EDIT 2: I wrote a blog post on how to show which branch you are working on in your website which others have asked me about (http://www.offroadcode.com/2010/5/14/which-svn-branch-are-you-working-on.aspx). Hope that helps. In the meantime we are looking at Kiln and hoping to make the switch next month (gulp!)
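    On the scripting side question (getting the files changed since the last push into a zip), a rough sketch with hypothetical repository URLs, assuming the command-line svn client that can be installed alongside TortoiseSVN:

        rem List the paths that differ between the "Latest" tag and the head of the Release branch.
        svn diff --summarize http://server/svn/site/tags/Latest http://server/svn/site/branches/release > changed.txt

        rem changed.txt contains one status letter and one path per line; a small script can then
        rem copy each listed file out of an up-to-date Release working copy into a deploy folder and zip it.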

    Read the article

< Previous Page | 426 427 428 429 430 431 432 433 434 435 436 437  | Next Page >