Search Results

Search found 16731 results on 670 pages for 'memory limit'.


  • Where is the Mac Divx Web Player 7 cache folder?

    - by user30710
    Until recently, I was using DivX Web Player 1.4.2 because it seemed to be the least buggy. It saved files in users/xxxxxx/movies/divx movies/temporary added files and deleted them when the cache limit was reached. Now with 7, it's still saving them somewhere, because I can watch my HD space go down, but I can't find the files. It's also not respecting the cache size limit (mine is 4 GB); the only way to reclaim the space is a restart of the Mac. I'm running 10.6.8 and Chrome, and I've looked everywhere for the folder manually. Where is it?

    Read the article

  • Why would you ever set MaxKeepAliveRequests to anything but unlimited?

    - by Jonathon Reinhart
    Apache's KeepAliveTimeout exists to close a keep-alive connection if a new request is not issued within a given period of time. Provided the user does not close his browser/tab, this timeout (usually 5-15 seconds) is what eventually closes most keep-alive connections, and prevents server resources from being wasted by holding on to connections indefinitely. Now the MaxKeepAliveRequests directive puts a limit on the number of HTTP requests that a single TCP connection (left open due to KeepAlive) will serve. Setting this to 0 means an unlimited number of requests are allowed. Why would you ever set this to anything but "unlimited"? Provided a client is still actively making requests, what harm is there in letting them happen on the same keep-alive connection? Once the limit is reached, the requests still come in, just on a new connection. The way I see it, there is no point in ever limiting this. What am I missing?
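    For reference, these are the directives under discussion; a minimal illustrative snippet (the values are examples, not recommendations):

        KeepAlive On
        KeepAliveTimeout 5          # close an idle keep-alive connection after 5 seconds
        MaxKeepAliveRequests 0      # 0 = unlimited requests per keep-alive connection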

    Read the article

  • How does heap compaction work quickly?

    - by Mason Wheeler
    They say that compacting garbage collectors are faster than traditional memory management because they only have to visit live objects, and by rearranging them in memory so that everything sits in one contiguous block, you end up with no heap fragmentation. But how can that be done quickly? It seems to me that this is equivalent to the bin-packing problem, which is NP-hard and, as far as we know, can't be solved in a reasonable amount of time on a large dataset. What am I missing?
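    For context, compactors don't need to bin-pack: a sliding ("Lisp 2"-style) collector keeps live objects in address order and moves them with a few linear passes, so the work is O(n). A minimal toy sketch in C (the Obj layout and mark flags are hypothetical, and the pointer-fixup pass is only noted in a comment):

        /* Sliding compaction over a toy heap of fixed-size objects. */
        #include <stdio.h>

        typedef struct {
            int live;        /* set by the mark phase (assumed already done) */
            int payload;
            int new_index;   /* forwarding address, computed in pass 1 */
        } Obj;

        #define N 8

        int main(void) {
            Obj heap[N] = {
                {1, 10, 0}, {0, 11, 0}, {1, 12, 0}, {0, 13, 0},
                {0, 14, 0}, {1, 15, 0}, {1, 16, 0}, {0, 17, 0},
            };

            /* Pass 1: assign each live object its post-compaction slot. O(n). */
            int next = 0;
            for (int i = 0; i < N; i++)
                if (heap[i].live)
                    heap[i].new_index = next++;

            /* (A real collector would now rewrite every pointer field via
             * new_index -- also a linear scan, omitted in this toy heap.) */

            /* Pass 2: slide live objects left; relative order is preserved,
             * so each destination slot has already been vacated. O(n). */
            for (int i = 0; i < N; i++)
                if (heap[i].live)
                    heap[heap[i].new_index] = heap[i];

            for (int i = 0; i < next; i++)
                printf("slot %d: payload %d\n", i, heap[i].payload);
            return 0;
        }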

    Read the article

  • Failed to Upload a File

    - by CrazyNick
    User 'X' is the site-collection owner. He tries to upload a 500 KB file into a document library and gets the error: "The server has aborted your upload. The files selected may exceed the server's upload size limit. If you are transferring a large group of files, try uploading fewer at a time." However, web-application owners are able to upload the file. What could be the issue? Any thoughts?
    - Upload size limit for a file: 5 MB
    - Site quota template set: 50 MB
    - Used site quota: 10 MB

    Read the article

  • WCF streaming files

    - by Pinu
    I need to pass a memory stream to the WCF server. How do I add this data type to my data contract? I will eventually need to convert it back to a memory stream and pass it on to my service layer. In my data contract:

        [DataMember]
        Stream str = null;

        public Stream File
        {
            get { return str; }
            set { str = value; }
        }

    Read the article

  • question about sorting

    - by skydoor
    Bubble sort is O(n) at best and O(n^2) at worst, and its memory usage is O(1). Merge sort is always O(n log n), but its memory usage is O(n). Which algorithm would we use to implement a function that takes an array of integers and returns the max integer in the collection, assuming that the length of the array is less than 1000? What if the array length is greater than 1000?
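    For reference, finding the maximum doesn't require sorting at all: a single linear scan runs in O(n) time and O(1) space at any array length. A minimal C sketch:

        /* Return the largest element of a[0..n-1] with one linear pass. */
        #include <limits.h>
        #include <stdio.h>

        int max_int(const int *a, size_t n) {
            int best = INT_MIN;              /* identity element for max over ints */
            for (size_t i = 0; i < n; i++)
                if (a[i] > best)
                    best = a[i];
            return best;
        }

        int main(void) {
            int data[] = {3, 41, -7, 12, 999};
            printf("%d\n", max_int(data, sizeof data / sizeof data[0]));
            return 0;
        }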

    Read the article

  • OpenVPN bandwidth restrictions and CPU power needed

    - by user197664
    In OpenVPN, is there a way to set a maximum limit on data and speed per user? Say user 'reptar' is abusing the VPN and I want to limit his/her speed and/or data; how would one go about doing this? Would I need to know the IP address of the abuser? Also, I have seen articles around the internet about turning a Raspberry Pi into a VPN server. If I did such a thing, how many users would the device be able to handle at a given time? I believe it has 512 MB of RAM and is clocked at around 700 MHz.

    Read the article

  • Google Search Appliance: Limiting Number of Results

    - by senfo
    I am attempting to limit the number of results that are displayed as a result of dynamic result clustering on the Google Search Appliance. I've looked through the XSLT, but I've only come across the following two user-modifiable options:

        <!-- *** dynamic result cluster options *** -->
        <xsl:variable name="show_res_clusters">1</xsl:variable>
        <xsl:variable name="res_cluster_position">right</xsl:variable>

    Are there more options that I'm unaware of that I could use to limit the results? Is there another way that I'm missing?

    Read the article

  • Suggestion for a Data Structure!

    - by Jay
    I have the following requirements for a data structure:
    - Direct access to an element via a key (the key is an integer, and its range is the same as the integer range)
    - Avoid memory allocation in chunks (allocate contiguous memory for the data structure, including the data)
    - The data structure should be able to grow dynamically
    Which data structure would you suggest? Any pointers in the right direction would also help.
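    For illustration, one candidate that meets all three points for dense non-negative keys is a realloc-grown contiguous array indexed directly by the key; for sparse keys over the full int range, an open-addressing hash table kept in one contiguous allocation fits the same constraints. A minimal C sketch (the names are illustrative):

        #include <stdio.h>
        #include <stdlib.h>

        typedef struct {
            int   *data;   /* one contiguous allocation, grown in place */
            size_t cap;
        } DynArray;

        /* Grow geometrically until the key's slot exists, then store the value.
         * Note: realloc leaves newly grown slots uninitialized in this sketch. */
        static int dyn_set(DynArray *a, size_t key, int value) {
            if (key >= a->cap) {
                size_t ncap = a->cap ? a->cap : 16;
                while (ncap <= key)
                    ncap *= 2;
                int *p = realloc(a->data, ncap * sizeof *p);
                if (!p) return -1;
                a->data = p;
                a->cap  = ncap;
            }
            a->data[key] = value;
            return 0;
        }

        int main(void) {
            DynArray a = {0};
            if (dyn_set(&a, 5, 42) == 0)
                printf("%d\n", a.data[5]);   /* direct O(1) access by key */
            free(a.data);
            return 0;
        }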

    Read the article

  • $HTTP_HOST multiple rewrite

    - by nrivoli
    I have Apache 2.2 on Ubuntu 12. I want to load a different environment based on HTTP_HOST. This works for the first domain only; after that I get error 500, "Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace." and I am not able to see my error...

        RewriteEngine on
        RewriteBase /

        RewriteCond %{HTTPS} off
        RewriteRule ^ - [F]

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{HTTP_HOST} ^server-dev.domain.tld$ [NC]
        RewriteRule ^(.*)$ /api/dev.php/$1 [L]

        RewriteCond %{HTTP_HOST} ^server-qa.domain.tld$ [NC]
        RewriteRule ^(.*)$ /api/qa.php/$1 [L]

        #RewriteCond %{HTTP_HOST} ^localhost$ [NC]
        #RewriteRule ^(.*)$ /api/localhost.php/$1 [L]

    I am accessing:

        https://server-dev.domain.tld/api/whatever
        https://server-qa.domain.tld/api/whatever

    Read the article

  • How to open DataSet in Visual Studio 2008 faster?

    - by Ekkapop
    When I open a DataSet in Visual Studio 2008 to design or modify it, it always takes a very long time (more than five minutes) before I can continue with my work. While I'm waiting I can't do anything in Visual Studio; moreover, CPU and memory usage grow dramatically. Is there any way to reduce this waiting time?

    Hardware - Desktop
    - CPU: Intel Q6600
    - Memory: 4 GB
    - HDD: 320 GB, 7200 rpm
    - OS: Windows XP 32-bit with Service Pack 3

    Read the article

  • Query doesn't use a covering index when applicable

    - by Dor
    I've downloaded the employees database and executed some queries for benchmarking purposes. Then I noticed that one query didn't use a covering index, although there was a corresponding index that I had created earlier. Only when I added a FORCE INDEX clause to the query did it use the covering index. I've uploaded two files: one is the executed SQL queries and the other is the results. Can you tell why the query uses a covering index only when a FORCE INDEX clause is added? The EXPLAIN shows that in both cases the index dept_no_from_date_idx is being used anyway. To adapt myself to the standards of SO, I'm also writing the content of the two files here.

    The SQL queries:

        USE employees;

        /* Create an index for an index-covered query */
        CREATE INDEX dept_no_from_date_idx ON dept_emp (dept_no, from_date);

        /* Show `dept_emp` table structure, indexes and generic data */
        SHOW TABLE STATUS LIKE "dept_emp";
        DESCRIBE dept_emp;
        SHOW KEYS IN dept_emp;

        /* The EXPLAIN shows that the subquery doesn't use a covering index */
        EXPLAIN SELECT SQL_NO_CACHE * FROM dept_emp
        INNER JOIN (
            /* The subquery should use a covering index, but doesn't */
            SELECT SQL_NO_CACHE emp_no, dept_no
            FROM dept_emp
            WHERE dept_no = "d001"
            ORDER BY from_date DESC
            LIMIT 20000, 50
        ) AS `der` USING (`emp_no`, `dept_no`);

        /* The EXPLAIN shows that the subquery DOES use a covering index, thanks to the FORCE INDEX clause */
        EXPLAIN SELECT SQL_NO_CACHE * FROM dept_emp
        INNER JOIN (
            /* The subquery uses a covering index */
            SELECT SQL_NO_CACHE emp_no, dept_no
            FROM dept_emp FORCE INDEX (dept_no_from_date_idx)
            WHERE dept_no = "d001"
            ORDER BY from_date DESC
            LIMIT 20000, 50
        ) AS `der` USING (`emp_no`, `dept_no`);

    The results:

        CREATE INDEX dept_no_from_date_idx ON dept_emp (dept_no, from_date);
        Query OK, 331603 rows affected (33.95 sec)
        Records: 331603  Duplicates: 0  Warnings: 0

        SHOW TABLE STATUS LIKE "dept_emp";
        1 row in set (0.47 sec), shown vertically for readability:
            Name: dept_emp
            Engine: InnoDB
            Version: 10
            Row_format: Compact
            Rows: 331883
            Avg_row_length: 36
            Data_length: 12075008
            Max_data_length: 0
            Index_length: 21544960
            Data_free: 29360128
            Auto_increment: NULL
            Create_time: 2010-05-04 13:07:49
            Update_time: NULL
            Check_time: NULL
            Collation: utf8_general_ci
            Checksum: NULL
            Create_options:
            Comment:

        DESCRIBE dept_emp;
        +-----------+---------+------+-----+---------+-------+
        | Field     | Type    | Null | Key | Default | Extra |
        +-----------+---------+------+-----+---------+-------+
        | emp_no    | int(11) | NO   | PRI | NULL    |       |
        | dept_no   | char(4) | NO   | PRI | NULL    |       |
        | from_date | date    | NO   |     | NULL    |       |
        | to_date   | date    | NO   |     | NULL    |       |
        +-----------+---------+------+-----+---------+-------+
        4 rows in set (0.05 sec)

        SHOW KEYS IN dept_emp;
        +----------+------------+-----------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
        | Table    | Non_unique | Key_name              | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
        +----------+------------+-----------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
        | dept_emp |          0 | PRIMARY               |            1 | emp_no      | A         |      331883 |     NULL | NULL   |      | BTREE      |         |
        | dept_emp |          0 | PRIMARY               |            2 | dept_no     | A         |      331883 |     NULL | NULL   |      | BTREE      |         |
        | dept_emp |          1 | emp_no                |            1 | emp_no      | A         |      331883 |     NULL | NULL   |      | BTREE      |         |
        | dept_emp |          1 | dept_no               |            1 | dept_no     | A         |           7 |     NULL | NULL   |      | BTREE      |         |
        | dept_emp |          1 | dept_no_from_date_idx |            1 | dept_no     | A         |          13 |     NULL | NULL   |      | BTREE      |         |
        | dept_emp |          1 | dept_no_from_date_idx |            2 | from_date   | A         |      165941 |     NULL | NULL   |      | BTREE      |         |
        +----------+------------+-----------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
        6 rows in set (0.23 sec)

        EXPLAIN (without FORCE INDEX):
        +----+-------------+------------+--------+----------------------------------------------+-----------------------+---------+------------------------+-------+-------------+
        | id | select_type | table      | type   | possible_keys                                | key                   | key_len | ref                    | rows  | Extra       |
        +----+-------------+------------+--------+----------------------------------------------+-----------------------+---------+------------------------+-------+-------------+
        |  1 | PRIMARY     | <derived2> | ALL    | NULL                                         | NULL                  | NULL    | NULL                   |    50 |             |
        |  1 | PRIMARY     | dept_emp   | eq_ref | PRIMARY,emp_no,dept_no,dept_no_from_date_idx | PRIMARY               | 16      | der.emp_no,der.dept_no |     1 |             |
        |  2 | DERIVED     | dept_emp   | ref    | dept_no,dept_no_from_date_idx                | dept_no_from_date_idx | 12      |                        | 21402 | Using where |
        +----+-------------+------------+--------+----------------------------------------------+-----------------------+---------+------------------------+-------+-------------+
        3 rows in set (0.09 sec)

        EXPLAIN (with FORCE INDEX):
        +----+-------------+------------+--------+----------------------------------------------+-----------------------+---------+------------------------+-------+--------------------------+
        | id | select_type | table      | type   | possible_keys                                | key                   | key_len | ref                    | rows  | Extra                    |
        +----+-------------+------------+--------+----------------------------------------------+-----------------------+---------+------------------------+-------+--------------------------+
        |  1 | PRIMARY     | <derived2> | ALL    | NULL                                         | NULL                  | NULL    | NULL                   |    50 |                          |
        |  1 | PRIMARY     | dept_emp   | eq_ref | PRIMARY,emp_no,dept_no,dept_no_from_date_idx | PRIMARY               | 16      | der.emp_no,der.dept_no |     1 |                          |
        |  2 | DERIVED     | dept_emp   | ref    | dept_no_from_date_idx                        | dept_no_from_date_idx | 12      |                        | 37468 | Using where; Using index |
        +----+-------------+------------+--------+----------------------------------------------+-----------------------+---------+------------------------+-------+--------------------------+
        3 rows in set (0.05 sec)

    Read the article

  • one high-end server with one Application Server or multiple Application Servers?

    - by elgcom
    If I have a high-end server, for example with 1 TB of memory and 8 x 4-core CPUs, will it bring more performance to run multiple application servers (on different JVMs) rather than just one? On the application server I will run some services (EARs with message-driven beans) which exchange messages with each other. By the way, does 64-bit Java now have no memory limitation any more? http://java.sun.com/products/hotspot/whitepaper.html#64

    Read the article

  • C Programming - My program is good enough for my assignment but I know it's not good

    - by Joe
    Hi there. I'm just starting an assignment for uni and it's raised a question for me: I don't understand how to return a string from a function without having a memory leak.

        char* trim(char* line)
        {
            int start = 0;
            int end = strlen(line) - 1;

            /* find the start position of the string */
            while(isspace(line[start]) != 0) {
                start++;
            }
            //printf("start is %d\n", start);

            /* find the end position of the string */
            while(isspace(line[end]) != 0) {
                end--;
            }
            //printf("end is %d\n", end);

            /* calculate string length and add 1 for the sentinel */
            int len = end - start + 2;

            /* initialise char array to len and read in characters */
            int i;
            char* trimmed = calloc(sizeof(char), len);
            for(i = 0; i < (len - 1); i++) {
                trimmed[i] = line[start + i];
            }
            trimmed[len - 1] = '\0';
            return trimmed;
        }

    As you can see, I am returning a pointer to char, which points to an array. I found that if I tried to make the 'trimmed' array with something like char trimmed[len]; then the compiler would throw up a message saying that a constant was expected on that line. I assume this means that for some reason you can't use variables as the array length when initialising an array, although something tells me that can't be right. So instead I made my array by allocating some memory to a char pointer. I understand that this function is probably waaaaay sub-optimal for what it is trying to do, but what I really want to know is:
    1. Can you normally initialise an array using a variable to declare the length, like char trimmed[len];?
    2. If I had an array of that type (char trimmed[]), would it have the same return type as a pointer to char (i.e. char*)?
    3. If I make my array by callocing some memory and assigning it to a char pointer, how do I free this memory? It seems to me that once I have returned this array, I can't access it to free it, as it is a local variable.
    Many thanks in advance, Joe
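    On question 3, the usual convention is that ownership of the calloc'd buffer transfers to the caller, which frees it when done. A minimal caller sketch (assuming the trim() above):

        #include <stdio.h>
        #include <stdlib.h>

        char *trim(char *line);   /* the function from the question */

        int main(void) {
            char *t = trim("   hello   ");
            printf("[%s]\n", t);  /* prints [hello] */
            free(t);              /* the caller releases the buffer trim() allocated */
            return 0;
        }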

    Read the article

  • Tuning MySQL to take advantage of a 4GB VPS

    - by alistair.mp
    Hello, we're running a large site at the moment which has a dedicated VPS for its database server, running MySQL and nothing else. At the moment all four CPU cores are running close to 100% all of the time, but memory usage sticks at around 268 MB out of an available 4096 MB. What can we do to better utilise the memory and reduce the CPU load by tweaking MySQL's settings? Here is what we currently have in my.cnf: http://pastie.org/private/hxeji9o8n3u9up9mvtinbq Thanks
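    For orientation, the usual levers on a dedicated database box are the cache and buffer sizes. A hedged starting-point sketch for a 4 GB machine (illustrative values only, not a tuned recommendation -- the right numbers depend on the InnoDB/MyISAM mix and the actual workload):

        [mysqld]
        # Main InnoDB cache for data and indexes; usually the biggest single win
        innodb_buffer_pool_size = 2G
        # MyISAM index cache; only matters if MyISAM tables are in use
        key_buffer_size = 256M
        # Per-connection buffers multiply per client, so keep them small
        sort_buffer_size = 2M
        join_buffer_size = 2M
        # In-memory temporary tables before spilling to disk
        tmp_table_size = 64M
        max_heap_table_size = 64M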

    Read the article

  • Keep printed documents on Windows Server 2008 R2 Print Server

    - by MadBoy
    I've set up Windows 2008 R2 as a print server. I have checked the "Keep printed documents" option for all printers and it works fine: users print their stuff and I can see what they are doing. The problem is that everyone sees all documents that get printed, which is not always the best idea. Is there a way to:
    1. Limit print jobs to be seen only by the people who printed them and by admins.
    2. Limit print jobs to be seen only on the server (from within Server Manager), so that print jobs disappear from the user queue when they are done (but admins are still able to see them and track what was printed and when, for reporting purposes).
    3. Create some kind of access level list, so that some people can see everything getting printed, some people see their own print jobs, and some people see nothing :-)

    Read the article

  • size of extent on LVM2

    - by piotrek
    In LVM1 there was a limit of 65k extents, so the extent size had to be chosen carefully, trading wasted space on partitions (too big an extent) against the maximum possible size of a logical volume (too small an extent). In LVM2 (according to http://docstore.mik.ua/manuals/hp-ux/en/5992-4589/apa.html) the limit is ~16 million extents, so the default size of 4 MB already allows roughly 60 TB per LV (16M extents x 4 MB each). So is there any point in making the extent larger than 4-16 MB on a desktop? Is there any performance degradation or other cost to having a big number of extents?

    Read the article

  • crash in calloc

    - by mmd
    I'm trying to debug a program I wrote. I ran it inside gdb and I managed to catch a SIGABRT from inside calloc(). I'm completely confused about how this can arise. Can it be a bug in gcc or even libc?

    More details: my program uses OpenMP. I ran it through valgrind in single-threaded mode with no errors. I also use mmap() to load a 40 GB file, but I doubt that is relevant. Inside gdb, I'm running with 30 threads. Several identical runs (same input and command line) finished correctly, until the problematic one that I caught. On the surface this suggests there might be a race condition of some type. However, the SIGABRT comes from calloc(), which is out of my control. Here is some relevant gdb output:

        (gdb) info threads
        [...]
        * 11 Thread 0x7ffff0056700 (LWP 73449)  0x00007ffff6a948a5 in raise () from /lib64/libc.so.6
        [...]
        (gdb) thread 11
        [Switching to thread 11 (Thread 0x7ffff0056700 (LWP 73449))]
        #0  0x00007ffff6a948a5 in raise () from /lib64/libc.so.6
        (gdb) bt
        #0  0x00007ffff6a948a5 in raise () from /lib64/libc.so.6
        #1  0x00007ffff6a96085 in abort () from /lib64/libc.so.6
        #2  0x00007ffff6ad1fe7 in __libc_message () from /lib64/libc.so.6
        #3  0x00007ffff6ad7916 in malloc_printerr () from /lib64/libc.so.6
        #4  0x00007ffff6adb79f in _int_malloc () from /lib64/libc.so.6
        #5  0x00007ffff6adbdd6 in calloc () from /lib64/libc.so.6
        #6  0x000000000040e87f in my_calloc (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/../gmapper/../common/my-alloc.h:286
        #7  read_get_hit_list_per_strand (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/mapping.c:1046
        #8  0x000000000041308a in read_get_hit_list (re=<value optimized out>, options=0x632010, n_options=1) at gmapper/mapping.c:1239
        #9  handle_read (re=<value optimized out>, options=0x632010, n_options=1) at gmapper/mapping.c:1806
        #10 0x0000000000404f35 in launch_scan_threads (.omp_data_i=<value optimized out>) at gmapper/gmapper.c:557
        #11 0x00007ffff7230502 in ?? () from /usr/lib64/libgomp.so.1
        #12 0x00007ffff6dfc851 in start_thread () from /lib64/libpthread.so.0
        #13 0x00007ffff6b4a11d in clone () from /lib64/libc.so.6
        (gdb) f 6
        #6  0x000000000040e87f in my_calloc (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/../gmapper/../common/my-alloc.h:286
        286         res = calloc(size, 1);
        (gdb) p size
        $2 = 814080
        (gdb)

    The function my_calloc() is just a wrapper, but the problem is not in there, as the real calloc() call looks legit. These are the limits set in the shell:

        $ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 2067285
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 10240
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 1024
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    The program is not out of memory; it's using 41 GB on a machine with 256 GB available:

        $ top -b -n 1 | grep gmapper
        73437 user      20   0 41.5g  16g  15g T  0.0  6.6  55:17.24 gmapper-ls

        $ free -m
                     total       used       free     shared    buffers     cached
        Mem:        258437     195567      62869          0         82     189677
        -/+ buffers/cache:       5807     252629
        Swap:            0          0          0

    I compiled using gcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4), with flags:

        -g -O2 -DNDEBUG -mmmx -msse -msse2 -fopenmp -Wall -Wno-deprecated -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS

    Read the article

  • Calling COM Library From XBAP

    - by Benny
    I am trying to call an old COM library from my XBAP and continue to receive the following exception:

        System.AccessViolationException was unhandled
        Message: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.

    I have tried adding the HKLM value for RunUnrestricted, to no avail. I don't get anything else but this error when calling the library. Any ideas? (This library even works from a pure ASP.NET app.)

    Read the article

  • How to copy files from shadow copy with long source path

    - by Jake
    The files and folders in my shared network drive (set up with DFS) were mass-deleted. Currently I am trying to recover the files from the shadow copy "Previous Versions". The problem is that thousands of files are deeply nested, with paths so long that copying fails with the dialog "Source Path Too Long". My guess is that the file paths just barely fit under the limit when saved to the network drive, but the shadow copy service appends the date and time to the folders, so the path character limit is exceeded. How else can I copy the files from the shadow copy?

    Read the article

  • Need Recommendations: Network Software and Hardware Setup for small firm

    - by Rogue
    Will be starting a small graphics design firm soon, with 20 employees, and therefore need software to manage the network. We have bought a bulk license of Windows 7. I have a spare computer which can act as a server if necessary, but it's an ancient Dell machine (Pentium III). If required I would purchase an extra machine, but I would like to avoid unnecessary costs at start-up. The following are the main functions that I would like to perform:
    - Monitor/control network traffic and internet usage; restrict access to certain websites
    - Alerts on access to certain software, and on attempts to tamper with privileges
    - Ability to view the desktop of any computer at any given time
    - Limit access to certain hardware, like USB ports, etc.
    - Limit access to folders on the computer
    - Log/report of all actions, including keystrokes, performed on any computer
    - Local network chat and talk client
    - Collaboration and work logs
    Is there any software available to do all of the above, and is any additional hardware required besides network switches, network cards and Cat5e cables? Any other recommendations besides the above-mentioned hardware setup?

    Read the article

  • What is NSString in struct?

    - by 4thSpace
    I've defined a struct and want to assign one of its values to an NSMutableDictionary. When I try, I get an EXC_BAD_ACCESS. Here is the code:

        // in .h file
        typedef struct {
            NSString *valueOne;
            NSString *valueTwo;
        } myStruct;

        myStruct aStruct;

        // in .m file
        - (void)viewDidLoad {
            [super viewDidLoad];
            aStruct.valueOne = @"firstValue";
        }

        // at some later time
        [myDictionary setValue:aStruct.valueOne forKey:@"key1"]; // dies here with EXC_BAD_ACCESS

    This is the output in the debugger console:

        (gdb) p aStruct.valueOne
        $1 = (NSString *) 0xf41850

    Is there a way to tell what the value of aStruct.valueOne is? Since it is an NSString, why does the dictionary have such a problem with it?

    ------------- EDIT -------------

    This edit is based on some comments below. The problem appears to be in the struct memory allocation. I have no issues assigning the struct value to the dictionary in viewDidLoad, as mentioned in one of the comments. The problem is that later on, I run into an issue with the struct. Just before the error, I do:

        po aStruct.valueOne
        Program received signal EXC_BAD_ACCESS, Could not access memory.
        Reason: KERN_PROTECTION_FAILURE at address: 0x00000000
        0x9895cedb in objc_msgSend ()
        The program being debugged was signaled while in a function called from GDB.
        GDB has restored the context to what it was before the call.
        To change this behavior use "set unwindonsignal off"
        Evaluation of the expression containing the function (_NSPrintForDebugger) will be abandoned.

    This occurs just before the EXC_BAD_ACCESS:

        NSDateFormatter *formatter = [[NSDateFormatter alloc] init];
        [formatter setDateFormat:@"MM-dd-yy_HH-mm-ss-A"];
        NSString *date = [formatter stringFromDate:[NSDate date]];
        [formatter release];
        aStruct.valueOne = date;

    So the memory issue is most likely in my releasing of formatter. The date var has no retain. Should I instead be doing

        NSString *date = [[formatter stringFromDate:[NSDate date]] retain];

    which does work, but then I'm left with a memory leak.

    Read the article

  • Cisco IOS ACL types

    - by cjavapro
    The built-in command help lists access list types by number range:

        router1(config)#access-list ?
          <1-99>            IP standard access list
          <100-199>         IP extended access list
          <1100-1199>       Extended 48-bit MAC address access list
          <1300-1999>       IP standard access list (expanded range)
          <200-299>         Protocol type-code access list
          <2000-2699>       IP extended access list (expanded range)
          <700-799>         48-bit MAC address access list
          dynamic-extended  Extend the dynamic ACL absolute timer
          rate-limit        Simple rate-limit specific access list
        router1(config)#

    What are each of the types? Can multiple types of ACLs be applied to a given interface?
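    On the second question: for IP ACLs, an interface takes one ACL per direction via ip access-group, so an inbound and an outbound ACL can be applied at the same time. An illustrative sketch (the interface name and ACL numbers are made up):

        router1(config)#interface FastEthernet0/0
        router1(config-if)#ip access-group 101 in
        router1(config-if)#ip access-group 102 out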

    Read the article

  • iptables logging to different file via syslog-ng

    - by rahrahruby
    I have the following configuration in my iptables and syslog-ng files:

    IPTABLES

        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 222 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 3306 -j ACCEPT
        -A INPUT -j DROP
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7

    SYSLOG-NG

        destination d_iptables { file("/var/log/iptables/iptables.log"); };
        filter f_iptables { facility(kern) and match("IN=" value("MESSAGE")) and match("OUT=" value("MESSAGE")); };
        filter f_messages { level(info,notice,warn) and not facility(auth,authpriv,cron,daemon,mail,news) and not filter(f_iptables); };
        log { source(s_src); filter(f_iptables); destination(d_iptables); };

    I restart syslog-ng and the log is not written.

    Read the article
