Search Results

Search found 546 results on 22 pages for 'dat chu'.

Page 6/22

  • Format External HD on Airport Extreme Base Station?

    - by David.Chu.ca
    I just got an Airport Extreme Base Station. I am planning to use it to replace my current Linksys router, and that part works fine. My other purpose for getting this device is to use its USB port to connect several external HDs as Time Machine backup destinations. I can see one 500GB HD through the Airport Base Station, but I cannot format it using Disk Utility. What I did instead was connect the HD to my iMac directly, erase and format the disk with Disk Utility, and then move the HD back to the Airport Base Station. After that I could designate the HD as my Time Machine destination. Is there any way to format an HD while it is connected to the Airport Base Station? Are there any issues with using an external HD through the Airport Base Station for Time Machine backups?

    Read the article

  • Backup SQL server db issue: delete old backup files

    - by David.Chu.ca
    I tried to use the sqlmaint.exe tool to back up a database on a remote PC. Here is an example of the backup command: sqlmaint.exe -S remoteSQLServer\SQLInstance -U username -P pwdxxx -D myDB -BkUpMedia DISK -BkUpDB C:\MSSQL_Backups -DelBkUps 3days ... Here I specified that backups older than 3 days should be deleted. However, the job does not seem to delete the old .bak files on the remote PC (where the SQL Server sits). The remote PC runs Windows Server 2008. I also shared C:\MSSQL_Backups as a network drive with Everyone as owner. My understanding is that the job should delete any .bak files older than 3 days. What did I miss? By the way, the job runs on a box with SQL Server 2005 installed.
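    If sqlmaint keeps ignoring -DelBkUps, one workaround is to prune the old files yourself from a scheduled task. A minimal sketch, assuming the backups land in C:\MSSQL_Backups with a .bak extension and a three-day retention (both are only illustrative):

        import os, time

        backup_dir = r"C:\MSSQL_Backups"    # assumed backup folder from the command above
        max_age = 3 * 24 * 60 * 60          # three days, in seconds
        now = time.time()

        for name in os.listdir(backup_dir):
            path = os.path.join(backup_dir, name)
            # remove only .bak files whose last modification is older than the retention window
            if name.lower().endswith(".bak") and now - os.path.getmtime(path) > max_age:
                os.remove(path)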

    Read the article

  • How to make addin visible on toolbar by script or batch process?

    - by David.Chu.ca
    I have an add-in for Excel 2003. I tried to use MSDN's VBS script to install it (see my previous question). It seems to work as an add-in, but it is not displayed on the toolbar when it is first installed on a box. What I have to do is go to Tools > Add-Ins, uncheck my add-in (for example myAddin), and then go back to the same dialog and check it again. I think Excel must keep this configuration somewhere, in XML or in the registry? Is there any way to run a script that makes the add-in visible on the toolbar?
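    Excel 2003 records add-ins ticked in Tools > Add-Ins as OPEN / OPEN1 / OPEN2 ... string values under HKCU\Software\Microsoft\Office\11.0\Excel\Options, so scripting that value is one way to pre-check an add-in. A minimal sketch, assuming a hypothetical add-in path and that no OPEN value exists yet (a real script should look for the next free OPENn slot first):

        import winreg

        addin_path = r"C:\AddIns\myAddin.xla"    # hypothetical add-in location
        options_key = r"Software\Microsoft\Office\11.0\Excel\Options"

        with winreg.CreateKey(winreg.HKEY_CURRENT_USER, options_key) as key:
            # Excel loads add-ins listed in OPEN, OPEN1, OPEN2, ...; /R opens the workbook read-only
            winreg.SetValueEx(key, "OPEN", 0, winreg.REG_SZ, '/R "{0}"'.format(addin_path))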

    Read the article

  • How to add linked Oracle server to SQL 2008 Express?

    - by David.Chu.ca
    I have tried installing both the 32-bit and 64-bit Oracle Client 11g packages on Windows 2008 R2 with SQL Server 2008 Express. However, I still cannot see the Oracle provider in SQL Server when logged in as sa. Is this even possible with SQL Server 2008 Express, and if so, how? I followed the installation steps from this article: Making Linked Server Connection Between SQL Server 64 Bit & Oracle 32 Bit | MS SQL World. After the installation and a reboot of Windows, I still cannot see the Oracle provider in the list of linked server providers in SQL Server.

    Read the article

  • Any addin or tool to open multiple web sites on one page?

    - by David.Chu.ca
    Is there any add-in or tool for a web browser that lets me view multiple web sites on one page or tab? Or is there a web service that lets me customize one page for multiple web sites? For example, within a page I would define regions, such as a table or several frames, and then assign a URL to each cell or frame. After that setup I would be able to view several sites on one tab or page. Is there any tool like that available?

    Read the article

  • How to Use RDA to Generate WLS Thread Dumps At Specified Intervals?

    - by Daniel Mortimer
    Introduction
    There are many ways to generate a thread dump of a WebLogic Managed Server. For example, take a look at: Taking Thread Dumps [an excellent blog post on the Middleware Magic site] or Different ways to take thread dumps in WebLogic Server (Document 1098691.1). There is another method - use Remote Diagnostic Agent! The solution described below is not documented, but it is relatively straightforward to execute. One advantage of using RDA to collect the thread dumps is that RDA will also collect configuration, log files, network, system and performance information at the same time.
    Instructions
    1. Not familiar with Remote Diagnostic Agent? Take a look at my previous blog "Resolve SRs Faster Using RDA - Find the Right Profile".
    2. Choose a profile which includes the WebLogic Server data collection modules (for example the profile "WebLogicServer"). At RDA setup time you should see the prompt below:
        -------------------------------------------------------------------------------
        S301WLS: Collects Oracle WebLogic Server Information
        -------------------------------------------------------------------------------
        Enter the location of the directory where the domains to analyze are located
        (For example in UNIX, <BEA Home>/user_projects/domains or <Middleware Home>/user_projects/domains)
        Hit 'Return' to accept the default (/oracle/11AS/Middleware/user_projects/domains)
        >
        For a successful WLS connection, ensure that the domain Admin Server is up and running.
        Data Collection Type:
          1  Collect for a single server (offline mode)
          2  Collect for a single server (using WLS connection)
          3  Collect for multiple servers (using WLS connection)
        Enter the item number
        Hit 'Return' to accept the default (1)
        > 2
    Choose option 2 or 3. Note: collecting for a single server or for multiple servers using a WLS connection means that RDA will attempt to connect and execute online WLST commands against the targeted server(s). The thread dumps are collected using the WLST function threadDumps(). If WLST cannot connect to the managed server, RDA will proceed to collect other data and ignore the request to collect thread dumps. If in the final output you see no Thread Dump menu item, then it is likely that the managed server is in a state which prevents new connections to it. If faced with this scenario, you would have to employ alternative methods for collecting thread dumps.
    3. The RDA setup will create a setup.cfg file in the RDA_HOME directory. Open this file in an editor. You will find the following parameters, which govern the number of thread dumps and the thread dump interval:
        #N.Number of thread dumps to capture
        WREQ_THREAD_DUMP=10
        #N.Thread dump interval
        WREQ_THREAD_DUMP_INTERVAL=5000
    The example lines above show the default settings. In other words, RDA will collect 10 thread dumps at 5000 millisecond (5 second) intervals. You may want to change this to something like:
        #N.Number of thread dumps to capture
        WREQ_THREAD_DUMP=10
        #N.Thread dump interval
        WREQ_THREAD_DUMP_INTERVAL=30000
    However, bear in mind that such a change will increase the total amount of time it takes for RDA to complete its run.
    4. Once you are happy with setup.cfg, run RDA. RDA will collect, render, generate and package all files in the output directory.
    5. For ease of viewing, open up the RDA start HTML file - "xxxx__start.htm". The thread dumps can be found under the WLST Collections for the target managed server(s). See the screenshots below.
    Screenshot 1: RDA Start Page - Main Index
    Screenshot 2: Managed Server Sub Index
    Screenshot 3: WLST Collections
    Screenshot 4: Thread Dump Page - List of dump file links
    Screenshot 5: Thread Dump Dat File Link
    Additional comments:
    A) You can view the thread dump files within the RDA Start Page framework, but most likely you will want to download the dat files for in-depth analysis via thread dump analysis tools such as Thread Dump Analyzer or Samurai (a GUI based tail / thread dump analysis tool). If you are new to thread dump analysis, take a look at this recorded Support Advisor Webcast: Oracle WebLogic Server: Diagnosing Performance Issues through Java Thread Dumps [slide deck from the webcast in PDF format].
    B) I have logged a couple of enhancement requests for the RDA Development Team to consider: add a timestamp to the dump file links, to the dat filename and at the top of the body of the dat file; and package the individual thread dumps in a zip so all dump files can be conveniently downloaded in one go.
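    As a side note, step 3 can also be scripted when RDA is rolled out on many hosts. A rough sketch, reusing the parameter names shown above and a hypothetical setup.cfg path:

        import re
        from pathlib import Path

        cfg = Path("/path/to/rda/setup.cfg")   # hypothetical RDA_HOME location
        text = cfg.read_text()
        # keep 10 dumps but stretch the interval to 30 seconds, as in the example above
        text = re.sub(r"^WREQ_THREAD_DUMP=.*$", "WREQ_THREAD_DUMP=10", text, flags=re.M)
        text = re.sub(r"^WREQ_THREAD_DUMP_INTERVAL=.*$", "WREQ_THREAD_DUMP_INTERVAL=30000", text, flags=re.M)
        cfg.write_text(text)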

    Read the article

  • launching executable from /usr/local/bin needs access to local file

    - by kedmond
    I have an executable called "octane" that I want to be able to launch globally. The executable requires access to a local binary, "octane.dat", in order to run. I placed the directory containing the executable and the binary in /opt/ as root and created a symbolic link to the executable in /usr/local/bin/. Now, if I type "octane" anywhere, it launches but throws an error saying it won't run without the binary, "octane.dat". Octane will only launch if my current working directory is the same as the executable and binary, in /usr/local/bin. Any suggestions on how to fix this? Do I have to make that directory global using .bashrc? Thanks.
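    One common fix is to replace the symlink with a small wrapper that changes into the install directory before starting the real program, so octane always finds octane.dat next to it. A minimal sketch, assuming the real files live in a hypothetical /opt/octane directory (save it as /usr/local/bin/octane and make it executable):

        #!/usr/bin/env python3
        # /usr/local/bin/octane - wrapper that runs the real octane from its own directory
        import os, sys

        install_dir = "/opt/octane"    # hypothetical: wherever the real octane and octane.dat live
        os.chdir(install_dir)          # octane apparently looks for octane.dat in the current directory
        os.execv(os.path.join(install_dir, "octane"), ["octane"] + sys.argv[1:])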

    Read the article

  • System user authentication via web interface [closed]

    - by donodarazao
    Background: We have one pretty slow and expensive satellite Internet connection that is shared in a network with 5-50 users. To limit traffic, users pay a certain sum of money per hour. Routing and per-user traffic accounting are done by an openSUSE 10.3 server. Login is done via pppoe, and for each connection, username, bytes_sent, bytes_rcvd, start_time, end_time, etc. are written into a MySQL database. Now it has been decided that we want to change from time-based to volume-based pricing. As the original developer who installed the system a couple of years ago isn't available, I'm trying to make the changes. Although I'm absolutely new to all this, there is some progress. However, there's one point where I'm absolutely stuck. Up to now, only administrators can access connection details and billing information via a web interface. But as volume-based prices are less transparent to users than time-based prices, it is essential that users themselves can check their connections and how much they cost via the web interface. For this, we need some kind of user authentication. Actual question: How do I develop such a user authentication? Every user has a Linux system user account. With this user name and password, the client machines connect to the pppoe server. I thought about two possible ways to authenticate users: First possibility: Users type username and password into a form, and this is then somehow checked. We already have the ability to change passwords via the web interface. Here are parts of the code. Part of the Perl script the homepage is linked to: #!/usr/bin/perl use CGI; use CGI::Carp qw(fatalsToBrowser); use lib '../lib'; use own_perl_module; my @error; my $data; $query = new CGI; $username = $query->param('username') || ''; $oldpasswd = $query->param('oldpasswd') || ''; $passwd = $query->param('passwd') || ''; $passwd2 = $query->param('passwd2') || ''; own_perl_module::connect(); if ($query->param('submit')) { my $benutzer = own_perl_module::select_benutzer(username => $username) or push @error, "user not exists"; push @error, "your password?!?" unless $passwd; unless (@error) { own_perl_module::update_benutzer($benutzer->{id}, { oldpasswd => $oldpasswd, passwd => $passwd, passwd2 => $passwd2 }, error => \@error) and push @error, "Password changed."; } } Here's part of the sub update_benutzer in the own_perl_module: if ($dat->{passwd} ne '') { my $username = $dat->{username} || $select->{username}; my $system = "./chpasswd.pl '$username' '$dat->{passwd}'" . (defined($dat->{oldpasswd}) ? " '$dat->{oldpasswd}'" : undef); my $answer = `$system`; if ($? != 0) { chomp($answer); push @$error, $answer || "error changing password ($?)"; Here's chpasswd.pl: #!/usr/bin/perl use FileHandle; use IPC::Open3; local $username = shift; local $passwd = shift; local $oldpasswd = shift; local $chat = { 'Old Password: $' => sub { print POUT "$oldpasswd\n"; }, 'New password: $' => sub { print POUT "$passwd\n"; }, 'Re-enter new password: $' => sub { print POUT "$passwd\n"; }, '(.*)\n$' => sub { print "$1\n"; exit 1; } }; local $/ = \1; my $command; if (defined($oldpasswd)) { $command = "sudo -u '$username' /usr/bin/passwd"; } else { $command = "sudo /usr/bin/passwd '$username'"; } $pid = open3(\*POUT, \*PIN, \*PERR, $command) or die; my $buffer; LOOP: while($_ = <PERR>) { $buffer .= $_; foreach (keys(%$chat)) { if ($buffer =~ /$_/i) { $buffer = undef; &{$chat->{$_}}; } } } exit; Could this somehow be adjusted to verify users without changing user passwords?
The second possibility I see: all pppoe connections are logged in the mysql database. If I could somehow retrieve the username (or uid) of the user connected by pppoe, this could be used to authenticate users. Users could only check their internet connections and costs when they are online (and thus paying money), but this could be tolerated. Here's a line of the script that inserts connections into the database: my $username = $ENV{PEERNAME}; I thought it would be easy to use this variable, but $username seems to be always empty in test-scripts (print $username). Any idea how to retrieve the user connected to the pppoe server? Sorry for the long question! Any help would be very much appreciated. :)
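    For the first approach (checking the username and password the user types into the form), the check can also be made against the system accounts directly instead of driving passwd. A minimal sketch of the idea in Python - it needs permission to read /etc/shadow (root or the shadow group) and is only an illustration, not a drop-in replacement for the Perl module above:

        import crypt, hmac, spwd

        def check_system_password(username, password):
            """Return True if the password matches the user's /etc/shadow entry."""
            try:
                stored = spwd.getspnam(username).sp_pwdp    # needs root or the 'shadow' group
            except KeyError:
                return False                                # unknown user
            if not stored or stored.startswith(("!", "*")):
                return False                                # locked or passwordless account
            return hmac.compare_digest(crypt.crypt(password, stored), stored)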

    Read the article

  • How do I add English-UK thesaurus in LibreOffice?

    - by Chethan S.
    I have been trying to use an English (UK) thesaurus in LibreOffice but so far cannot get it to work. I installed mythes-en-us, as it was the only one available in the repos. As suggested in some forums I searched for an openoffice.org thesaurus, but again only the US version was available. As a result I can't enable the thesaurus when I choose English (UK). As instructed elsewhere I tried this: cd /usr/share/myspell/dicts; sudo ln -s th_en_US_v2.dat th_en_GB_v2.dat; sudo ln -s th_en_US_v2.idx th_en_GB_v2.idx. Still the result is the same as shown in the screenshots above. What can be done now?

    Read the article

  • Write and fprintf for file I/O

    - by Darryl Gove
    fprintf() does buffered I/O, whereas write() does unbuffered I/O. So once the write() completes, the data is in the file, whereas for fprintf() it may take a while for the file to get updated to reflect the output. This results in a significant performance difference - the write works at disk speed. The following is a program to test this: #include <fcntl.h> #include <unistd.h> #include <stdio.h> #include <stdlib.h> #include <errno.h> #include <stdio.h> #include <sys/time.h> #include <sys/types.h> #include <sys/stat.h> static double s_time; void starttime() { s_time=1.0*gethrtime(); } void endtime(long its) { double e_time=1.0*gethrtime(); printf("Time per iteration %5.2f MB/s\n", (1.0*its)/(e_time-s_time*1.0)*1000); s_time=1.0*gethrtime(); } #define SIZE 10*1024*1024 void test_write() { starttime(); int file = open("./test.dat",O_WRONLY|O_CREAT,S_IWGRP|S_IWOTH|S_IWUSR); for (int i=0; i<SIZE; i++) { write(file,"a",1); } close(file); endtime(SIZE); } void test_fprintf() { starttime(); FILE* file = fopen("./test.dat","w"); for (int i=0; i<SIZE; i++) { fprintf(file,"a"); } fclose(file); endtime(SIZE); } void test_flush() { starttime(); FILE* file = fopen("./test.dat","w"); for (int i=0; i<SIZE; i++) { fprintf(file,"a"); fflush(file); } fclose(file); endtime(SIZE); } int main() { test_write(); test_fprintf(); test_flush(); } Compiling and running, I get 0.2MB/s for write() and 6MB/s for fprintf() - a large difference. There are three tests in this example; the third test uses fprintf() together with fflush(). This is equivalent to write() both in performance and in functionality. Which leads to the suggestion that fprintf() (and other buffered I/O functions) are the fastest way of writing to files, and that fflush() should be used to enforce synchronisation of the file contents.

    Read the article

  • Preseed values for keyboard-configuration

    - by Nikolai
    I'd like to activate Ctrl+Alt+Backspace on multiple PCs (Ubuntu 14.04) using dpkg-reconfigure keyboard-configuration. I don't want to go through the dialog every time, so on a host where I had answered the dialog with the desired answers I ran: debconf-get-selections | grep "^keyboard-configuration" > config.dat. On the host where I want to do it non-interactively I tried: debconf-set-selections config.dat followed by dpkg-reconfigure -f noninteractive keyboard-configuration. It ran without asking, but afterwards the selections were reset to the default values. How can I get dpkg to use my predefined values? Or is there another way to enable Ctrl+Alt+Backspace for all users without going through the dialogs?
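    One thing worth checking (a likely explanation, but verify on your setup): on Ubuntu the canonical keyboard settings live in /etc/default/keyboard, and keyboard-configuration reads its answers back from that file, so debconf selections that disagree with it tend to get overwritten. Setting the option there on every PC and then reapplying it may be the simpler route; the relevant line is:

        XKBOPTIONS="terminate:ctrl_alt_bksp"

    After editing the file, dpkg-reconfigure -f noninteractive keyboard-configuration (or setupcon) should apply it without any dialog, though the option may only take effect for X sessions after a restart.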

    Read the article

  • Gluster bricks are offline and errors in logs

    - by Roman Newaza
    I have substituted all the IP addresses with hostnames and renamed configs (IP to hostname) in /var/lib/glusterd by my shell script. After that I restarted Gluster Daemon and the volume. Then I checked if all the peers are connected: root@GlusterNode1a:~# gluster peer status Number of Peers: 3 Hostname: gluster-1b Uuid: 47f469e2-907a-4518-b6a4-f44878761fd2 State: Peer in Cluster (Connected) Hostname: gluster-2b Uuid: dc3a3ff7-9e30-44ac-9d15-00f9dab4d8b9 State: Peer in Cluster (Connected) Hostname: gluster-2a Uuid: 72405811-15a0-456b-86bb-1589058ff89b State: Peer in Cluster (Connected) I could see mounted volumes size change on all the nodes when I execute df command, so new data is coming. But recently I noticed error messages in app log: copy(/storage/152627/dat): failed to open stream: Structure needs cleaning readfile(/storage/1438227/dat): failed to open stream: Input/output error unlink(/storage/189457/23/dat): No such file or directory Finally, I have found out some bricks are offline: root@GlusterNode1a:~# gluster volume status Status of volume: storage Gluster process Port Online Pid ------------------------------------------------------------------------------ Brick gluster-1a:/storage/1a 24009 Y 1326 Brick gluster-1b:/storage/1b 24009 N N/A Brick gluster-2a:/storage/2a 24009 N N/A Brick gluster-2b:/storage/2b 24009 N N/A Brick gluster-1a:/storage/3a 24011 Y 1332 Brick gluster-1b:/storage/3b 24011 N N/A Brick gluster-2a:/storage/4a 24011 N N/A Brick gluster-2b:/storage/4b 24011 N N/A NFS Server on localhost 38467 Y 24670 Self-heal Daemon on localhost N/A Y 24676 NFS Server on gluster-2b 38467 Y 4339 Self-heal Daemon on gluster-2b N/A Y 4345 NFS Server on gluster-2a 38467 Y 1392 Self-heal Daemon on gluster-2a N/A Y 1402 NFS Server on gluster-1b 38467 Y 2435 Self-heal Daemon on gluster-1b N/A Y 2441 What can I do about that? I need to fix it. Note: CPU and Network usage of all the four nodes are about the same.
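    Since the peers are connected but several brick processes show Online = N, a common first step (hedged - check it against your GlusterFS version's documentation) is to force-start the volume so the dead brick processes are respawned, then re-check status and let self-heal catch up:

        gluster volume start storage force
        gluster volume status storage
        gluster volume heal storage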

    Read the article

  • file association in Windows 8

    - by Robith Nuriel Haq
    Associating a file with a program should be easy in Windows. However, I find it rather difficult on Windows 8. Associating a file with a desktop application that I install on my computer is easy, because the whole operation is exactly the same as in previous Windows releases. What I find difficult is associating a file with a Metro app downloaded from the Store. So far, I have been using Multimedia 8 (a Metro app) to open my video files. However, this app cannot handle particular files, like *.dat, that can easily be associated with desktop video applications such as Media Player Classic and the like. When I try to associate my DAT files with Multimedia 8, there is indeed a "look for another app on this PC" option at the bottom of the "open with" pop-up, but alas, I cannot figure out how to locate the Multimedia 8 app with which I want to associate my DAT files (as well as other video files that are not yet associated with this Metro app). If anyone knows how to locate those Metro apps, please tell me. Many thanks.

    Read the article

  • Qt qUncompress gzip data

    - by talei
    Hello, I stumbled upon a problem and can't find a solution. What I want to do is uncompress data in Qt, using qUncompress(QByteArray), sent from the web in gzip format. I used Wireshark to determine that this is a valid gzip stream, and also tested it with zip/rar, which can both uncompress it. The code so far is like this: static const char dat[40] = { 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0xaa, 0x2e, 0x2e, 0x49, 0x2c, 0x29, 0x2d, 0xb6, 0x4a, 0x4b, 0xcc, 0x29, 0x4e, 0xad, 0x05, 0x00, 0x00, 0x00, 0xff, 0xff, 0x03, 0x00, 0x2a, 0x63, 0x18, 0xc5, 0x0e, 0x00, 0x00, 0x00 }; // this data contains the string {status:false}, in gzip format QByteArray data; data.append( dat, sizeof(dat) ); unsigned int size = 14; // expected uncompressed size, reconstructed big-endian // prepend the expected uncompressed size; the last 4 bytes in dat are 0x0e = 14 QByteArray dataPlusSize; dataPlusSize.append( (unsigned int)((size >> 24) & 0xFF)); dataPlusSize.append( (unsigned int)((size >> 16) & 0xFF)); dataPlusSize.append( (unsigned int)((size >> 8) & 0xFF)); dataPlusSize.append( (unsigned int)((size >> 0) & 0xFF)); QByteArray uncomp = qUncompress( dataPlusSize ); qDebug() << uncomp; And the uncompression fails with: qUncompress: Z_DATA_ERROR: Input data is corrupted. AFAIK gzip consists of a 10 byte header, the DEFLATE payload and an 8 byte trailer (4 byte CRC32 + 4 byte ISIZE - the uncompressed data size). Stripping the header and trailer should leave me with the DEFLATE data stream, but qUncompress yields the same error. I checked with a data string compressed in PHP, like this: $stringData = gzcompress( "{status:false}", 1); and qUncompress uncompresses that data (I didn't see a gzip header there though, i.e. ID1 = 0x1f, ID2 = 0x8b). I stepped through the code in the debugger, and the error occurs at: if ( #endif ((BITS(8) << 8) + (hold >> 8)) % 31) { // here is the error, WHY? long unsigned int hold = 35615 strm->msg = (char *)"incorrect header check"; state->mode = BAD; break; } inflate.c line 610. I know that qUncompress is simply a wrapper around zlib, so I suppose it should handle gzip without any problem. Any comments are more than welcome. Best regards
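    The failing check ((BITS(8) << 8) + (hold >> 8)) % 31 is zlib's zlib-header test: qUncompress expects the zlib wrapper (what gzcompress produces), not a gzip wrapper, so the 0x1f 0x8b magic fails the header check no matter what size prefix is prepended. Decoding the gzip framing needs an inflate set up with windowBits = 16 + MAX_WBITS, which in Qt means dropping down to the zlib C API instead of qUncompress. A quick illustration of the same idea with Python's zlib binding, using the exact bytes from the question:

        import zlib

        gz = bytes([
            0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03,
            0xaa, 0x2e, 0x2e, 0x49, 0x2c, 0x29, 0x2d, 0xb6, 0x4a, 0x4b,
            0xcc, 0x29, 0x4e, 0xad, 0x05, 0x00, 0x00, 0x00, 0xff, 0xff,
            0x03, 0x00, 0x2a, 0x63, 0x18, 0xc5, 0x0e, 0x00, 0x00, 0x00,
        ])

        # 16 + MAX_WBITS tells zlib to expect a gzip header and trailer
        print(zlib.decompress(gz, 16 + zlib.MAX_WBITS))   # expected: b'{status:false}'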

    Read the article

  • plot matrix missing points in different color using gnuplot

    - by kitt
    I have a file 'matrix.dat':
        1 2 3 4 5
        5 - 3 4 5
        - 4 5 B -
        1 B 2 B 3
        - 3 2 - 3
    I want to plot the numbers using a palette, '-' using white and 'B' using black. In gnuplot, I use this palette (blue - cyan - green - orange - red): set palette model HSV functions 0.666*(1-gray), 1, 1 And I set '-' as missing data: set datafile missing "-" plot 'matrix.dat' matrix with image Now I can only plot the numbers and '-' in the correct colors.

    Read the article

  • compressed archive with quick access to individual file

    - by eric.frederich
    I need to come up with a file format for a new application I am writing. This file will need to hold a bunch of other files which are mostly text but can be other formats as well. Naturally, a compressed tar file seems to fit the bill. The problem is that I want to be able to retrieve some data from the file very quickly, and getting just a particular file out of a tar.gz file seems to take longer than it should. I am assuming that this is because it has to decompress the entire archive even though I just want one file. When I have a regular uncompressed tar file I can get that data very quickly. Let's say the file I need quickly is called data.dat. For example, the command tar -x data.dat -zf myfile.tar.gz is what takes a lot longer than I'd like. MP3 files have ID3 data and JPEG files have EXIF data that can be read quickly without opening the entire file. I would like my data.dat file to be available in a similar way. I was thinking that I could leave it uncompressed and separate from the rest of the files in myfile.tar.gz; I could then create a tar file of data.dat and myfile.tar.gz, and hopefully that data could be retrieved faster because it is at the head of the outer tar file and is uncompressed. Does this sound right - putting a compressed tar inside of a tar file? Basically, my need is an archive-type file with quick access to one particular file. Tar does this just fine, but I'd also like to have the data compressed, and as soon as I do that, I no longer have quick access. Are there other archive formats that will give me the quick access I need? As a side note, this application will be written in Python. If the solution calls for re-inventing the wheel with my own binary format, I am familiar with C and would have no problem writing the Python module in C. Ideally I'd just use tar, dd, cat, gzip, etc. though. Thanks, ~Eric
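    Since the application is in Python anyway, one existing format worth trying before inventing a container is zip: members are compressed individually and indexed by a central directory, so a single file can be pulled out without decompressing the rest. A small sketch with the standard zipfile module (the file names are just examples):

        import zipfile

        # build the archive: each member is compressed on its own
        with zipfile.ZipFile("myfile.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
            zf.write("data.dat")
            zf.write("notes.txt")
            zf.write("image.jpg")

        # later: read just data.dat without touching the other members
        with zipfile.ZipFile("myfile.zip") as zf:
            quick = zf.read("data.dat")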

    Read the article

  • how to increase the limit of exceptions in oracle

    - by Arunachalam
    How can I increase the limit of exceptions in Oracle? I have an Excel sheet with about 900 records to be appended, so I converted the Excel sheet to a .dat file and wrote a batch file that reads from the .dat file and appends the records to the table concerned. But the batch file stops executing once the exceptions reach 51 (all of them integrity constraint violations: parent key not found), so the remaining valid records are not loaded. It is also very difficult to find which record violates the integrity constraint. Is there a way to increase this exception limit?
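    A limit of 50 rejected rows is the default for Oracle's SQL*Loader, so if the batch file loads the .dat file with sqlldr (an assumption - the post does not name the tool), raising the ERRORS parameter lifts the limit, and the rejected records are written to a .bad file where the offending rows can be inspected. File names below are placeholders:

        sqlldr userid=scott/tiger control=load.ctl data=records.dat bad=records.bad errors=1000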

    Read the article

  • Remove a layer from a ggplot2 chart

    - by Erik Shilts
    I'd like to remove a layer (in this case the results of geom_ribbon) from a ggplot2 created grid object. Is there a way I can remove it once it's already part of the object? library(ggplot2) dat <- data.frame(x=1:3, y=1:3, ymin=0:2, ymax=2:4) p <- ggplot(dat, aes(x=x, y=y)) + geom_ribbon(aes(ymin=ymin, ymax=ymax), alpha=0.3) + geom_line() # This has the geom_ribbon p # This overlays another ribbon on top p + geom_ribbon(aes(ymin=ymin, ymax=ymax, fill=NA))

    Read the article

  • Non class files with Java Web Start

    - by Michael Munsey
    How do you distribute other files needed by your application that aren't in a jar file? For example, the application at http://www.javabeginner.com/java-swing/java-swing-shuffle-game . The download contains Shuffle.jar, Shuffle.bat, Score.dat, and an images folder with 3 images in it. I can see possibly putting the images directly in Shuffle.jar, but you wouldn't want to put Score.dat in the jar file because it changes. Is there somewhere you could identify this type of file in the jnlp?

    Read the article

  • How to work with DataGridView if I need to show many columns of data (approx. 1 million)

    - by ruprog
    There is a problem displaying data in a DataGridView: a large amount of data (stock quotes) has to be displayed from left to right. Please tell me what to do to display an array of data in the DataGridView. Public dat As New List(Of act) Public Class act Public time As Date Public price As Integer End Class Sub work() Dim r As New Random For x As Integer = 0 To 1000000 Dim el As New act el.time = Now el.price = r.Next(0, 1000) dat.Add(el) Next End Sub

    Read the article

  • Is there a way to access Skype IM logs?

    - by eternalmatt
    I tried looking in C:\Users\[name]\AppData\Roaming\Skype and I found a folder called "chatsync" with some mysterious folders with files ending in .dat. I'm almost certain that these are the chat logs but I don't know a way to properly open these files (notepad shows gibberish mixed with English words). I'm not trying to do sleuthing, just a project for my girlfriend. So more importantly: how can I read Skype's .dat files properly?

    Read the article

  • Problem in creating Temporary File in android?

    - by Rajapandian
    Hi, in my MediaPlayer application for streaming video I am using the following code: File temp = File.createTempFile("mediaplayertmp", "dat"); While running, it throws exceptions like: Parent directory of file is not writable: /sdcard/mediaplayertmp43912.dat. I don't know how to handle this problem, and I also want to know where the file gets created when that code executes. If anyone knows the solution, please help, preferably with some code.

    Read the article

  • Add a row to UITableView for adding new item?

    - by David.Chu.ca
    In order to provide a UI for users to add new items to my table view, I would like to add a new row to my table at a specified location (the last row, for example) when the view is in edit mode (I have an edit button on the right side of the view's navigation bar). This new row will have an add (+) indicator on the left side and a disclosure accessory arrow on the right. When the view is not in edit mode, this add row should not be displayed. I am not sure whether I should override: - (void)setEditing:(BOOL)editing animated:(BOOL)animated{...} where I call the UITableView method: insertRowsAtIndexPaths:(NSArray *)indexPaths withRowAnimation:(UITableViewRowAnimation)animation to insert a new row? My understanding is that this call may add a new row to the table view. The table view's data source is Core Data storage, and I am not sure whether this may cause an inconsistency between the number of items in the data store and the number of rows in the table view. If that approach is OK and I have to manage rows in the table view myself, how can I add the add indicator on the left and the disclosure arrow on the right to the new row? Another question: if I do insert a new row as the Add row, should I remove it when the table view is not in edit mode? I just want to know if I am on the right track.

    Read the article

  • Add new item to UITableView and Core Data as data source?

    - by David.Chu.ca
    I have trouble adding a new item to my table view with Core Data. Here is the brief logic in my code. In my ViewController class, I have a button to toggle edit mode: - (void) toggleEditing { UITableView *tv = (UITableView *)self.view; if (isEdit) // class level flag for editing { self.newEntity = [NSEntityDescription insertNewObjectForEntityForName:@"entity1" inManagedObjectContext:managedObjectContext]; NSArray *insertIndexPaths = [NSArray arrayWithObjects: [NSIndexPath indexPathForRow:0 inSection:0], nil]; // empty at beginning so hard code numbers here. [tv insertRowsAtIndexPaths:insertIndexPaths withRowAnimation:UITableViewRowAnimationFade]; [self.tableView setEditing:YES animated:YES]; // enable editing mode } else { ...} } In this block of code, I first added a new item to my current managed object context, and then I added a new row to my table view. I think that the number of objects in my data source or context and the number of rows in my table view should both be 1. However, I got an exception in tableView:numberOfRowsInSection: Invalid update: invalid number of rows in section 0. The number of rows contained in an existing section after the update (0) must be equal to the number of rows contained in that section before the update (0), plus or minus the number of rows inserted or deleted from that section (1 inserted, 0 deleted). The exception was raised right after the delegate method: - (NSInteger) tableView:(UITableView *) tableView numberOfRowsInSection:(NSInteger) section { // fetchedResultsController is a class member var of type NSFetchedResultsController id <NSFetchedResultsSectionInfo> sectionInfo = [[fetchedResultsController sections] objectAtIndex: section]; NSInteger rows = [sectionInfo numberOfObjects]; return rows; } In debug mode, I found that rows was still 0 and that this method was invoked after toggleEditing. It looks like the sectionInfo obtained from fetchedResultsController did not include the newly inserted entity object. I am not sure if I missed anything or any steps. I am also not sure how it works: how do I get the fetchedResultsController notified of, or reflecting, the change when a new entity is inserted into the current managed object context?

    Read the article
